The Troubadour Podcast

Exploring the Future of Artificial General Intelligence with Peter Voss

August 31, 2024 Kirk J. Barbera


Can artificial intelligence truly replicate human consciousness and creativity? Join us on the Troubadour channel as we sit down with Peter Voss, a trailblazer in the field of artificial intelligence, to explore this profound question. Peter takes us through his captivating journey from electronics engineering to becoming a key figure in Artificial General Intelligence (AGI), revealing the crucial lessons learned from the early days of AI research. We dissect the nuances between narrow AI and AGI, shedding light on the formidable challenges AI pioneers faced and the visionary potential of AGI to transform industries and human life.

We uncover the current limitations of AI and the tantalizing prospects for AGI. From the impressive yet constrained capabilities of large language models like ChatGPT to the dream of AGI with human-like autonomy and learning, Peter offers a candid assessment of where we stand and where we could go. Imagine AGI not just as a tool, but as an entity capable of revolutionizing fields such as biomedical research and space travel, driving progress with reasoning and efficiency that surpass human limits. We also ponder the implications of AGI on employment, governance, and the very nature of human creativity and consciousness.

Finally, we envision a future where AGI might create art akin to human geniuses and foster human flourishing on an unprecedented scale. Peter shares his company's ambitious goal of achieving AGI within five years, emphasizing the critical role of cognitive science. We conclude with a creative twist, discussing AGI's potential to compose poetry in the style of Wordsworth. With Peter's insights and a hopeful outlook, this episode is a deep dive into the transformative potential of artificial intelligence. Tune in for a thought-provoking conversation that promises to leave you both informed and inspired about the future of AI.

Speaker 1:

In terms of Objectivist principles: an AGI, I believe, would come up with very much Objectivist principles, without Ayn Rand.

Speaker 2:

Yes, yeah. See, that's why I don't see that at all, I guess. Yeah, absolutely, but how? Peter Voss, welcome to the Troubadour channel. I wanted to bring you on to talk about something that I have started to fall in love with over the last couple of years. It's become a big part of my life in a new way, and I think I have an interesting approach to understanding this as someone who is completely ignorant of it, and that is artificial intelligence and, specifically with you, artificial general intelligence. So I want to bring you on, but I first want to read a quick little bio so people get a sense of who you are.

Speaker 2:

So you're one of the pioneers in artificial intelligence. You and a few colleagues coined the term artificial general intelligence, AGI. For 20 years you've been studying what intelligence is, how it develops in humans, and the current state of AI. This research culminated in the creation, at your company Aigo, A-I-G-O, of a natural language intelligence engine that can think, learn, and reason, and adapt to and grow with the user. You're currently focusing on commercializing the second generation of your most advanced conversational AI technology. That's Aigo, A-I-G-O, and you can visit aigo.ai for lots of blogs and white papers. Now, Aigo is the most advanced natural language interaction platform built on a human brain-like cognitive architecture. I'm very interested in that. Aigo.ai is on a mission to provide highly intelligent and hyper-personalized assistance for everyone. Okay, so let's dig into that a little bit. I just want to get started with you and your background, how you got into AI, its origins for you, and moving into AGI. Yes, sure.

Speaker 1:

So I started out as an electronics engineer and started my own industrial electronics company, and then I fell in love with software and my company very quickly turned into a software company. I developed an ERP software package and that was quite successful. We went from the garage to 400 people and did an IPO. So that was exciting. But when I exited that company I thought, well, what is missing in software? And I realized that software really doesn't have intelligence by itself. It doesn't have common sense, it can't learn, it can't adjust or adapt to new circumstances. So if the programmer doesn't think of some scenario, it'll just crash or give you an error message or do something silly. So that really started me on the mission of: how can we build systems that can think and learn and reason the way humans do?

Speaker 2:

Okay, so you've been in software a long time, and did ERP?

Speaker 2:

I actually sold ERP for a very short period of time. I sold NetSuite. And so in our conversation today I'm really hoping we can dig into this from a different approach. Like, I have no technical background, no software experience except for selling it a little bit. I don't have training in computer science, anything. I'm purely interested in (this thing keeps falling down) purely interested in human consciousness, education, literature.

Speaker 2:

So at the Troubadour channel we do something called the Literary Canon Club, where we read through the canonical works of great literature, from Homer to Melville, you know, Rand, although I leave Rand to the people at the Ayn Rand Institute. So I'm coming at it from poetry, from literature, from art, and what they teach us, and also what I think they teach us about the mind and about life and human existence and things like that. And so I'm interested in the claims that you and other AI and AGI people have made and are making, and how those claims have changed. That's just my own personal interest in it.

Speaker 2:

And so first I wanted to get started with the distinguishing factors and ideas behind AGI and AI, just to kind of set the context of what is AGI, what is AI. Now, the world was really introduced to AI, I think, in a real way with ChatGPT. I use ChatGPT, I have it right here, and I ask it questions all the time, and to me it's a great tool. It's another wonderful tool, and my perspective is that's pretty much all AI will ever be: a tool. But I feel like with AGI and this human consciousness you may have a different perspective, that it may be able to do what humans do.

Speaker 1:

Yes, actually the idea was really to build thinking machines, machines that can think, learn and reason the way humans do and would be able to do purposeful things, achieve goals and so on. Now, they originally thought they could crack this problem within a year or two.

Speaker 2:

Oh really, yeah, All right, you got to like the cockiness.

Speaker 1:

And, of course, it turned out to be much, much, much harder. Yeah. So what happened? The field of AI really lost its vision over the decades, and it turned into narrow AI, where you take one particular problem that seems to require intelligence, such as playing chess or container optimization or something of that nature. But it's the human intelligence that actually solves the problem; it's not intelligence in the system per se. Take, for example, Deep Blue, the IBM world-champion chess-playing machine.

Speaker 1:

It's really the ingenuity and the intelligence of the programmers that figured out how to use a computer to play a good game of chess using the brute processing power of computers. So the field of AI really has been, and still is to a large extent, narrow AI. And so when I started becoming interested in AI, I realized this, and it was quite shocking, actually. So around about 2001, I found some other people who were also interested in going back to the original ideal, the original goal of AI: to build thinking machines. And in 2002, we decided to write a book on this endeavor, and so three of us came up with the term artificial general intelligence. To me, I particularly like the G in there, which is like in psychology: you have little g, which stands for general intelligence, a measure of intelligence like IQ.

Speaker 1:

So you really want to have a computer system that can learn anything, really, that a human can learn, or more; that hasn't been specifically built to solve one problem or one set of problems, but that can hit the books, can think, can reason, can learn by itself and innovate and come up with new ideas, can be a researcher or whatever, really any human cognitive task. It has the qualities that make human intelligence so special: the ability to form abstractions, the ability to apply knowledge learned in one domain to another domain, to have metacognition, to be able to think about your own thought process. So that is basically what AGI is about.

Speaker 2:

Okay, so I want to hone in on a couple of things you said to help me understand this. Again, I'm coming at it ignorant of the science and things of that nature. So you said something like AGI should be able to do research the way that a human does, and to me that's already kind of happening with AI. The ChatGPT models, the LLMs, are able to do, not what a human can do in creativity, but the processing of massive torrents of information.

Speaker 2:

I do think it does that already, and the concept seems like it's been around for a long time, but this breakthrough in LLMs over the last couple of years seems to be putting us in that process. So it's able to help, for instance, when researchers in the medical science community are doing research on Alzheimer's. If you're a human, you're reading through books, you're typing notes, but if you have an LLM to help you out, you can process massive quantities of information you weren't able to before. And so while the LLM is not exactly doing the research, necessarily, it kind of is, in the sense that it's like the most advanced research assistant in human history. So I'm just curious what you think about that idea of where we are right now, let's just call it, and appreciating the quality of that. Do you agree with that?

Speaker 1:

No, I think there's a very important distinction to be made, and I know a lot of people say, well, surely we already have AI that does research. But that's not at all the case. It's a tool that humans use. A human always has to be in the loop to basically tell it what to do, what data to feed in, and that's very different from a researcher who has a goal in mind and can then pursue that goal autonomously, basically. And LLMs cannot do that now, and they will not be able to do that, by the very nature of the way they're built.

Speaker 1:

And one of the important limitations of large language models and statistical approaches, big data approaches, is that they are pre-trained. In fact, let me just talk about what it's called: GPT. The G stands for generative. It makes up stuff, literally makes up stuff. Now, that's not good for a researcher. It's been trained on 10 trillion pieces of information, good, bad, and ugly, and it doesn't actually have a way to evaluate which of that information is good and bad and ugly. It can't think about its own processes. You have to do that. The human has to do that.

Speaker 2:

Yeah, it's a tool.

Speaker 1:

It's generative, it's a tool, yes. And then the P: it's pre-trained. It's had $100 billion, $200 billion worth of training, but it's a read-only model. The model itself does not update, does not learn, does not adapt or improve. So, you know, if you trained a human assistant who knew about scientific research up to a certain cutoff point, and you put them in a research lab but they couldn't learn anything new, they wouldn't be very useful. So the fact that they cannot learn in real time, that they cannot update their model in real time, is a real handicap.
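To make the "pre-trained, read-only" point concrete, here is a minimal sketch in Python, using the Hugging Face transformers library with the small GPT-2 model standing in for a large language model. The model choice and the prompt are illustrative assumptions, not anything Voss or Aigo uses; the point is only that inference never changes the weights.

```python
# Minimal sketch: a pre-trained model is "read-only" at inference time.
# GPT-2 stands in for a large language model. Illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # inference mode: the weights are fixed

prompt = "A new finding in Alzheimer's research:"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt")

# No gradients flow and no parameters update, no matter how many times
# generate() is called: the model cannot absorb anything it "reads" here.
with torch.no_grad():
    output = model.generate(inputs["input_ids"], max_new_tokens=30)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```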

Speaker 2:

But humans can, yeah, right. So what do we need AGI for? Because it's just more tools for humans.

Speaker 1:

Well, okay, that's a separate question of why do we want AGI.

Speaker 2:

Yeah, because what you're saying, basically, it sounds like, is an argument for creating a human, because humans do exactly what you're saying. And now we have, so, for instance: thousands of years ago a big invention happened, the written word, and that allowed us to codify our knowledge, to look at it in a space right in front of us, and to objectively evaluate torrents of things, rather than relying on just our memory of the concepts we could hold. So to me, ChatGPT is just another one of those, maybe on a much better scale. So I'm still struggling to understand what AGI would offer.

Speaker 1:

Okay, let me talk about that, because it is profound, and I think a lot of people have made the comment that the invention of AGI will be bigger than electricity, than really any invention that humans have ever come up with. Let me explain. I can give three areas where AGI will make a profound difference. The first one is research. Imagine you have one AGI that trains itself to become a PhD-level cancer researcher, just for argument's sake. You can now make a million copies of that AGI. You have a million PhD-level researchers chipping away at the problem, sharing information, pursuing different ideas, communicating much more effectively than humans, working 24/7 without getting tired, with photographic memory, instantaneous access to all the information in the world, and being much better at reasoning than humans are. We're not actually very good at logical thinking.

Speaker 2:

Oh yeah, that's true, it's an evolutionary afterthought. Yeah exactly.

Speaker 1:

You know so the amount of progress we'll make. I mean, when I was a kid it was we'll cure cancer in 10 years.

Speaker 2:

Yeah.

Speaker 1:

You know, and we're still 10 years away from it. We need a lot more intelligence to be brought to these difficult problems that we're facing, and that's true for other diseases too. Yeah, the human mind, I agree, is the ultimate resource, and basically, having machines that have an abundance of rationality will foster human flourishing. So research is the one area, but I'll talk about two others to illustrate.

Speaker 1:

The second one is, of course, the dramatic reduction in the cost of goods and services. Imagine any desk-bound job that you have, any cognitive task; let's leave the creative, artistic ones to one side. Any cognitive task that's desk-bound you can replace with AGI, say with a $10,000 machine. Now, the massive reduction in the cost of goods and services, and the radical abundance that will bring with it, is enormous. The third category I talk about is really for each individual, for each human. We call that a "personal personal personal assistant," and the reason we have three "personals" is that there are three different meanings of the word personal. The first "personal" is that, in my view, these personal assistants, you should own them, and they should serve your agenda and not some mega-corporation's. I mean, right now, the sort of simple assistants we have, like Siri and Alexa and Google, they don't really serve your agenda.

Speaker 2:

Yeah, they have their own purpose and goals.

Speaker 1:

Well, by the company that's providing it.

Speaker 2:

That's what I'm saying, yeah. And they have to provide value for you too, though, otherwise you won't use it, right? But they do have their own goals, to make money, basically.

Speaker 1:

Yeah, ultimately, I mean, you are the product in most cases, because you don't pay for these things.

Speaker 2:

For Siri? Yeah, well, you pay for the phone, but not especially for Siri. Okay, but I mean, I guess it's like I don't pay for the chairs when I go to a restaurant, but I pay to eat and I expect to sit down. It's still part of what I'm paying for; it's the value they're providing. Or when I go to a nice restaurant versus a Burger King, you're paying for the upgraded service.

Speaker 1:

Siri maybe is a little different there. In terms of Alexa and Google Assistant, I mean, they use the information; there, you are the product. I don't know to what extent Apple uses it.

Speaker 2:

Yeah, I mean, this is a whole other argument. To get back to the value of having a personal assistant: if you have an assistant that serves you, which I agree with, a personal assistant, I love the idea of that, for sure, being just about me, with my agenda, directed by me.

Speaker 1:

Yeah, and the second "personal" is that it's hyper-personalized to you. Yes. It learns your history, your goals, and so on. And the third "personal" is one of privacy: you control what it shares with whom. And so this personal personal assistant can not only do the dirty work for you, like dealing with the insurance company and stuff like that...

Speaker 2:

That'd be great. Yeah, DMV stuff.

Speaker 1:

But, more importantly, it'll actually be like a little angel on your shoulder that can help you think things through and make better decisions in life. In life you might have made a decision that you regret because you didn't spend enough time thinking about it, or you weren't clear enough, or you made an emotional decision, or you didn't do enough research. But having this personal personal assistant, you'll be able to make better decisions in life, and again that encourages human flourishing. So these are just three examples of the value of AGI. It's significant.

Speaker 2:

Yeah, I guess where I'm still trying to figure it out is, in my mind, AGI is a tool that solves some of the problems AI currently has, and to that degree I like it a lot, right, the solving of problems. I think you identified three problems with today's generative AI systems, and this is, I believe, in one of your blogs or white papers: the propensity to hallucinate, the inability to update the core model incrementally in real time, and the constant need for a human in the loop. So to the degree that we can decrease or eliminate those, that's great, I think. Good. Now, the reason I say "I think good" is because I know enough about human creativity and thinking to know that sometimes hallucinations lead to results.

Speaker 2:

Now, I think hallucination in this sense is probably a little different than hallucination in the human sense. But, you know, just the idea that you're kind of daydreaming and then something hits, and all of a sudden you're integrating things that you didn't integrate before. To me, the human mind and human consciousness is much more a maze than any kind of straight line: it's back and forth and up and down and left and right, and yellow, and depressed, and happy and sad. It's all of that integrated into, you know, that miasma. It's like an ocean of just stuff, and that's where creativity comes from; that's where new buildings come from, new scientific models.

Speaker 2:

Let's integrate, you know, biology with chemistry into biochemistry. Who would have ever thought of that, and how do we do it? That's a new integration. And that's just the sciences, let alone humanity, where I think we have barely scratched the surface of what human consciousness really is. I don't think psychologists really understand it. I think the poets were the best and closest at it, and still, I think, we're not there, you know.

Speaker 1:

Well, we can talk about that too. You know, when people say we don't understand consciousness, I tend to say, well, speak for yourself. I think there's a lot of understanding we actually do have, but we can come back to that.

Speaker 1:

But let's talk about creativity. I think AI has already demonstrated that it can be creative. I mean, it can create music, it can create new images, amazing images, create videos. It can do that. What it doesn't seem to be able to do is judge, according to our aesthetic, what is valuable and what isn't valuable, or how to tweak it. But in terms of computers being able to create new ideas, I think absolutely they can. Now, with AGI, the difference is that it won't have the negative hallucinations that current systems have. An AGI will know when it's doing it; it will be able to put itself...

Speaker 2:

Yes, thank you, it will be able to put itself into that mode.

Speaker 1:

Yes: let me loosen the constraints and just activate that mode.
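There is a rough software analogy for this deliberate "loosening of the constraints": the sampling temperature used when generating text from a language model. The sketch below illustrates that general idea only; it is not a description of Aigo's mechanism, and the numbers are made up.

```python
# Illustrative sketch: temperature as a dial for "loosening constraints".
# Low temperature concentrates probability on the safest continuation;
# high temperature flattens the distribution so unlikely combinations
# become live options, a controlled analogue of the creative mode above.
import torch

logits = torch.tensor([4.0, 2.0, 1.0, 0.5])  # made-up scores for 4 candidate words

for temperature in (0.2, 1.0, 2.0):
    probs = torch.softmax(logits / temperature, dim=0)
    print(f"T={temperature}:", [round(p.item(), 3) for p in probs])

# T=0.2 puts nearly all probability on the top word (strict, habitual);
# T=2.0 spreads it out, letting rare words through (deliberately loose).
```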

Speaker 2:

And AGI can do that, you're saying, theoretically. Okay, can it do that now? Can your system do that? Are you working toward that?

Speaker 1:

Yeah, not really, not our current system, because up to now we've been focusing on commercial systems, and you don't actually want those to come up with new ideas. I mean, we've basically been focusing on replacing call center agents, and you really just want them to follow the rules, the business rules, you know.

Speaker 2:

Oh yeah.

Speaker 1:

Yeah. So you don't want them to be creative and say, hey, I know how to sell this stuff better.

Speaker 2:

You don't want the human agents to be that creative either, correct? Yeah, exactly, they have a script. I've worked in those places; it's very scripted.

Speaker 1:

Right. So with our current system, it's not something we've focused on. But absolutely, AI can be creative, no doubt about it. Now, ultimately, can it create human-level art by itself? Will there always be a different level of artistic creativity? I'll leave that open, you know.

Speaker 2:

If it is able to do this PhD-level, million-replicas-of-itself thing and really master cancer, master the nature of the human body and all its particulars in a way that humans can't, I don't understand why it wouldn't be able to reach a Beethoven level of creativity.

Speaker 2:

So one of the things I just want to point out about what you said, about what AI is currently able to do in music and art and poetry, for instance: I even did a show on Robert Frost versus AI, where I took some themes and played around with ChatGPT to see what kind of poem it would come up with in a certain sonnet structure, based on one of Robert Frost's poems. And in my opinion it was a seventh-grade level at best. It was very, very beginner level, not creative with its imagination, with its images, with its metaphors, with anything; the structure, nothing was imaginative. Yeah, that's why I'm kind of hedging my bets in terms of, you know, artistic creativity.

Speaker 1:

But as far as scientific things are concerned, being able to combine different ideas and trying different combinations, it can do that tirelessly, you know.

Speaker 2:

Yeah, and you want to have that. I'm going to still call it a number cruncher, an advanced number cruncher; I know you have a different perspective. And again, I think that's amazing and highly desired, because we'll be able to solve many problems. We'll be able to make new breakthroughs in biomedical research and longevity and just everything. Space travel. It will allow us to do all the things that we've fantasized about, breakthroughs that AI currently, as impressive as it is for most of us, is not there for yet, and we can all feel it and see it when we interact with it.

Speaker 2:

Right. So, again, to the degree that it can either eliminate or utilize hallucination, because it seems like you almost said what I was trying to say about creativity and hallucination. Right now it hallucinates and ChatGPT will just spit it out as though it's the truth. But it sounds like what you're saying AGI would do, and what you're working toward, is that it would identify this, almost like: "this is a hallucination, and let me expand out to use it." Correct, yeah. To do it consciously.

Speaker 2:

To do it deliberately, let's use that word. That's what I would do in a creative state, right? Correct. I would think, you know, I might fantasize about something.

Speaker 1:

Correct. And there's actually, I think, a lot of people who already know about Daniel Kahneman and System 1 and System 2 thinking, from Thinking, Fast and Slow.

Speaker 2:

Yes, that's correct.

Speaker 1:

So where system one is sort of your automatic operation.

Speaker 2:

And he talks about art in... is it that one? Oh no, I'm sorry, I'm thinking of Blink by Malcolm Gladwell. Have you read that, Blink? I've got it. Anyway, continue on, sorry.

Speaker 1:

Right. So that's sort of the automatic thing. You know, if you're an accomplished driver, when you're driving normally, it's System 1. You're not even aware of it; it's automatized.

Speaker 1:

You don't have to be aware of it. But then something happens, something unusual, and then your System 2 kicks in. Your conscious control takes over: okay, I need to think about how I deal with this situation. I just missed the turnoff, or there's construction, or whatever; I need to do something different. So that's System 1 and System 2. Now, large language models really are System 1. They don't have the second level that knows what it's thinking or what it's doing. An AGI will have both System 1 and System 2 capability, so it'll be able to direct its thought processes appropriately to the task it's trying to accomplish, and some tasks will require more creativity, others more logical thinking, and so on.
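For readers who think in code, here is one hypothetical way to picture that division of labor: a cheap, automatic proposer (System 1) plus a monitor that notices low confidence and hands control to slower, deliberate reasoning (System 2). Every name in the sketch is an illustrative placeholder, not any real system's API.

```python
# Hypothetical sketch of a System 1 / System 2 loop. All function names
# are placeholders invented for illustration.

def fast_propose(situation: str) -> str:
    """System 1: cheap, automatic, habit-based response."""
    habits = {"normal driving": "stay in lane", "green light": "proceed"}
    return habits.get(situation, "unknown")

def assess_confidence(response: str) -> float:
    """Metacognition: how sure are we about the automatic answer?"""
    return 0.1 if response == "unknown" else 0.95

def deliberate(situation: str) -> str:
    """System 2: slow, explicit reasoning, engaged only when needed."""
    return f"stop and re-plan for: {situation}"

def act(situation: str) -> str:
    response = fast_propose(situation)
    if assess_confidence(response) < 0.5:  # surprise or confusion detected
        response = deliberate(situation)   # conscious control takes over
    return response

print(act("normal driving"))  # handled automatically by System 1
print(act("missed turnoff"))  # escalated to deliberate System 2 reasoning
```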

Speaker 2:

Okay, so, well, first, I wanted to say this earlier, so I'm just going to say it, because you mentioned System 1 and System 2 thinking, and unconscious and conscious, or subconscious and conscious. So you're arguing that AGI will be able to activate both, correct, in similar ways that we do.

Speaker 2:

It will have both, yeah. Again, where do you see... okay, you mentioned this a couple of times, but it seems very theoretical, I think, in your white paper, where you do talk about sensory perception, right? Because this, to me, is where AI is incapable of interacting with the world: you need flesh, you need eyeballs, you need a tongue and taste, and you need smell. You need all of these things, including a body physically interacting in the world. So if you don't have those, how do you actually develop a consciousness similar to humans'? For instance, we know animals have some form of consciousness. I don't think we know fully how it all operates, but we know there's something. It's not like ours, otherwise they'd be able to communicate in some way, but they can't, because they don't have that ability, obviously. But they have some kind of consciousness. So where's the role of the body and the senses in this cognitive AI system?

Speaker 1:

Right. So, a couple of things to unpack here. The first one is, somewhere in my writings I mentioned that my theory of AGI is the "Helen Hawking" theory of AGI. Thank you, I wrote that down; that's one of my questions.

Speaker 2:

Yes, tell me more about that.

Speaker 1:

So what I mean by that is: by "Helen Hawking" I mean Helen Keller, with very limited sense acuity, and Stephen Hawking, with very little dexterity. So clearly, you can be very intelligent even if you have very limited sense acuity and very limited dexterity.

Speaker 2:

Just, sorry, for everyone: Helen Keller was blind, deaf, and mute. Yes, correct. But she still had touch, and she still had taste. Touch and taste, I guess, were it for her. Correct. And obviously Stephen Hawking, the physicist. Now, Stephen Hawking did have fully functioning abilities until his early 20s, I believe, so he did have that memory going into his neurodegenerative phase, which was in his 20s, I believe.

Speaker 1:

Correct. It's sort of just a rough analogy.

Speaker 2:

It's a concept to help you understand.

Speaker 1:

It's a rough analogy, that you clearly don't need full human sense acuity and dexterity to be very, very smart. So the AGI systems we're building do have vision, so they can see the world, and they can act on the world. I believe there is some kind of grounding and a necessity to interact with the world. Now, we don't know at this stage how little of that grounding you can get away with, in terms of: does it need to be three-dimensional, or is two-dimensional good enough? Can you extrapolate from two dimensions to three dimensions, and things of that nature. So there is some grounding required. But ultimately, for AGI, we're not looking to totally replace humans in every way. We want the cognitive and problem-solving ability of a human. So can it learn the various concepts that it doesn't have? I mean, Helen Keller could learn about what people could see, even though she didn't have vision.

Speaker 2:

So you can extrapolate things. In a limited way.

Speaker 1:

Yeah, right, but if you read her writings and how she analyzed novels and so on, she clearly had a very good understanding.

Speaker 2:

Oh yes, I'm not an expert on Helen Keller. I know The Miracle Worker play, which I love, and I've read snippets of her, so I don't know the extent of her contributions. But just as an example: I know from research and discussions with people who were born blind that they don't even have a concept of black in the way that we do.

Speaker 1:

And that to me is fascinating. Sort of the other evidence is, when you look at ChatGPT and large language models, it's actually amazing how much they purport to know, or seem to know, just from language, even though they clearly have no senses at all.

Speaker 2:

Yeah.

Speaker 1:

So you know it's not clear how little grounding you can get away with. Yeah.

Speaker 2:

But you do need a body, right? Don't you, like, in order to... So, first off, you said you're not necessarily saying replace humans. Yeah, which is fine, I get that.

Speaker 2:

But you're still saying there's something about a consciousness you can develop, right? So let me put it this way: there's a consciousness you think you can develop through AGI. Is it human-like or human, and are there differences there? You used the term human-like. I think AI right now is human-like.

Speaker 1:

Oh no, it's not at all human-like, because it cannot learn interactively, it doesn't conceptualize, it doesn't have metacognition. So it's not human-like at all.

Speaker 2:

No, but that would be human. What I'm saying is, AI is artificial. It's not the real human; it's our replica, our recreation of it to the best of our abilities. AGI might be the next step of that, but it still wouldn't be human. No, it wouldn't be human. Okay, so what would the differences be, is what I'm trying to get at, in your vision of AGI.

Speaker 2:

Between humans and AGI, yes. Because again, I'm saying there's human-like, and AI currently, in my view, is one form of human-like. I think a calculator is human-like: we can do calculations in our mind, but it can do them a million times better. So it's human-like in the sense of extrapolating one thing we can do and making it a lot better; it's a bicycle for the mind. Same thing with computers. But I want to understand the difference between human-like and human in regard to AGI.

Speaker 1:

So which one does AGI fit into, human-like or human, and why? Yeah, I think you're using... well, it's clearly not human.

Speaker 2:

So we can put that aside. So why not?

Speaker 1:

Well, it's not going to have a human body. It's not going to have a gut feeling, for example, because it's not going to have a gut. It's not going to have a heartbeat that races when you get excited.

Speaker 2:

It's not going to start sweating. So it is those body things. This was, in my mind, the leap that was problematic for me, but the way you're describing it makes a lot of sense. It's getting more toward the human, but it's just not going to be the human, of course. Correct.

Speaker 1:

So, I mean, to me the definition of AGI has always been: it's the cognitive ability of a human, the problem-solving ability, the ability to learn, think, and reason.

Speaker 2:

Yes. And that's it and you're trying to just improve on that. Yes, based on what it has.

Speaker 1:

And to have an artificial being that has that ability to learn and reason and solve problems, but it's not going to be a human. Okay, so there's actually another area where I think a useful distinction can be made. When we talk about emotions, the broader spectrum of emotions, I tend to separate them into two groups, roughly. The one group I call cognitive emotions, emotions that are necessary for cognition. That would include things like surprise, certainty, confusion, boredom. Those are necessary for an AI to really have as part and parcel of its mechanisms, to be able to know when it's surprised or confused. Now, it's not going to have the same experience that we have when we're surprised. But then humans have all of these other emotions; we might call them reptile emotions.

Speaker 2:

Yeah, fight or flight stuff.

Speaker 1:

Correct. It's basically reproduction and survival emotions, and those would include jealousy and love and hatred and lust and so on. Now, there's no reason to build those into an AGI. You wouldn't want to.

Speaker 2:

It would undermine their cognitive ability, as it does in humans. Okay, hold on. So why do you think that jealousy, love, hatred would impede a human's cognitive ability?

Speaker 1:

Well, they do. They make us do stupid things, things that we regret.

Speaker 2:

For sure, but I also think that that's the route to wisdom and knowledge. So I would say that the mistakes you make, the flaws, the errors, the bad pathways, whatever: that is where you learn and improve.

Speaker 1:

Yes, but it's not that an AGI isn't going to make mistakes that it can learn from. You don't need to make emotional mistakes to learn from them. You can just make cognitive mistakes: you got the wrong information somewhere and you made a mistake, you didn't think about it in the right way, any number of ways. So I don't think that problem-solving ability relies on the survival and reproduction emotions. For us to grow, I mean, it's an integral part of being a human; we can't separate those out.

Speaker 2:

Yeah, I guess I'm a little bit interested in the idea of it not being integral, or at least a part of, our cognitive thinking about things. You see, that's a big difference.

Speaker 1:

I would fully expect an AGI to be able to recognize these emotions in humans and to know how to respond to them, but there's a difference between being able to recognize them and feeling them. Think of it like a good psychologist. A psychologist will be able to recognize your distress, or whatever, without feeling it; they can disengage emotionally. And in the same way, an AI will be able to recognize the full spectrum of emotions and learn how to appropriately respond to or deal with them. But it's not going to inherently affect its behavior the way it does for us. I mean, if we are very emotional about something, it impairs our thinking. It can dominate our behavior.

Speaker 2:

Yes, it can, but it can also direct our actions.

Speaker 1:

And often in a bad way, I mean. That's why we value rationality so much: rationality can help us override inappropriate emotions.

Speaker 2:

Well, okay, because you said earlier that governance would be one aspect of what you think AGI could help us with. Okay, so how could you formulate something like the Declaration of Independence without a love of freedom?

Speaker 1:

Something like the Declaration of Independence without a love of freedom... Well, we should be on the same page on that, because we believe that ethics itself is rational, that it should be treated like a science. You basically come to the conclusion: if the purpose of ethics, of morality, is human flourishing, broadly speaking, and you treat ethics as a science, then certain things follow, and individual freedom is one of the things that follows for humans to be able to flourish. So it's a logical conclusion that an AGI should come to. It should come to the same conclusion.

Speaker 2:

But you don't think that integral to that conclusion is the love of controlling your own future, destiny, and existence?

Speaker 1:

No, it's a logical conclusion for humans to flourish. Flourishing is the goal.

Speaker 2:

But why is it logical? Why is flourishing logical to freedom?

Speaker 1:

No, freedom is necessary for flourishing; flourishing is the starting point.

Speaker 2:

But what about humans makes that integral, is what I'm saying. So they can pursue their loves and passions and desires, right?

Speaker 1:

And their thoughts. AGIs are not going to have their own motivation in terms of wanting to flourish themselves.

Speaker 2:

So, look, what I'm saying is: I can definitely understand how AGI and AI today can take an argument formulated by humans and understand the logic of the argument. But it sounds like you're making a claim that AGI on its own could come up with the concept, like, you know, the Declaration of Independence, freedom, on its own.

Speaker 1:

Yes.

Speaker 2:

Without love and interest and desire? Yes, absolutely. So that's where I'm confused. Yes, I agree that there's a logic to it, but the discovery was not... there's not an A-to-B, a linear line, to figuring out the discovery of the importance of flourishing as humans, because a lot of other philosophies have different views.

Speaker 1:

The importance of human flourishing is the given.

Speaker 2:

But it wasn't given. It's a discovery; someone had to discover and figure that out. For a long time people did not think that all humans should be treated equally. That's not an obvious argument throughout history; it's a new argument, and it's the right argument, and it took us thousands of years to figure it out. But we always thought that there was... you know, nobody liked being a slave, for instance, but we understood it as integral to actual flourishing and human existence. Without it, we thought, we'd all die.

Speaker 1:

So it became... Well, you've just said it. It follows from that.

Speaker 2:

It follows... what I'm saying is, it follows in retrospect, because we know this now. You could not have known this 500 years ago, 1,000 years ago. I don't blame Aristotle, for instance, for not freeing slaves; he didn't have the context of knowledge necessary. So yes, I could see AGI taking all that we've discovered, but you're making the claim, I think, that if we go back 4,000 years, AGI would be able to figure it out on its own.

Speaker 1:

Well, it's not omniscient, you know. And I mean, we discover things partially because of experience, of things that actually happen in the world. We do look at history, we look at experiments. I mean, at some point in my life I also thought that communism sounded like a good idea. Hey, you know, some of us are born with certain skills, and maybe we should contribute more than people who have less of those skills.

Speaker 2:

Sure. Oh, you're saying the maxim "from each according to his ability..." Yeah, right. That sounded good to you at a certain point.

Speaker 1:

Yeah, it sounded good to me, it sounded reasonable to me at the time. And then you run the experiment and you say, oh, okay, well, I didn't think of that; it doesn't quite work out.

Speaker 2:

And humans are humans, they're not robots. Yeah, it doesn't quite work out that way.

Speaker 1:

So, yes, in any case, an AGI will have all of the history and the experience that we've gone through as humans, so it would come up with that now.

Speaker 2:

So what I'm saying is, I think we're saying the same thing; I agree with that. What I'm trying to get at is this idea that you're saying it could do it without... like, it could come to the same conclusions without...

Speaker 1:

Much sooner than we would have, actually. That is a claim I'm making, because it's much more rational. Things that we say in hindsight, "well, of course that makes sense," an AGI is likely to have discovered sooner, because we are not actually very good at coming to new conclusions. It's uncomfortable for humans.

Speaker 2:

It is very uncomfortable for many humans for sure, Right.

Speaker 1:

So there is an inertia. And, as you say, with slavery, some people benefited from it.

Speaker 1:

So, you know, let's take politics right now... That has to be a rational conclusion, that this can't be the best system. And AI and AGI would be able to help us think through what the pros and cons of a better system would be. And once that better system is implemented, it would be like the Declaration of Independence; we'd say, well, of course it's obvious that we shouldn't have had this structure of selecting the president, or of what powers the president should have, etc. So, I mean, in terms of Objectivist principles: an AGI, I believe, would come up with very much Objectivist principles, without Ayn Rand.

Speaker 2:

Yes, yeah. See, that's why I just don't see that at all, I guess. Absolutely, but how? By reasoning, by thinking through what's logical? Would they be able to understand the physical, real-world constraints? Yes, absolutely. How?

Speaker 1:

Well, I mean, we already see in ChatGPT how this relatively dumb system can already understand a lot about the physical world. There's no reason why an AI wouldn't understand the external physical world: how physics works, how the world works, how atoms work, how gravity works, and stuff like that. Sure.

Speaker 2:

Yeah, and again, I think that's because humans did the physical part of it, because humans wrote it down really well for hundreds and even thousands of years: the explanation for why, if you drop a bowling ball from the top of the Leaning Tower of Pisa, it goes straight down, and if you drop it from the top of a moving ship, it goes straight down. And it's like, why does that happen? It seems like it would move at certain points. Like, there's certain physics there.

Speaker 2:

Yeah, there's certain physics that, now, I don't know how an AI or AGI would be able to come up with without the humans having done it. Like, how would you logically do that without moving in space and time and experimenting in the real world? So you can synthesize: everything you're saying to me, again, I love the idea of synthesizing the knowledge that humans have observed and experimented on, and this is a new tool. But it does seem like you keep making the claim that it would be able to do what a human can do without the human.

Speaker 1:

Yeah, I do. And you seem to agree with that when you're talking about cancer research, for example, building on the knowledge we already have.

Speaker 2:

But, I think, guided by a human. No, I know you don't agree with that. This is where we have a disagreement.

Speaker 1:

Okay, why would human guidance be needed? Because then you are somehow believing that the AGI cannot reason as well as a human.

Speaker 2:

If it can reason as well as a human... No, it can reason as well as a human, cognitively, in its mind, based on the data given to it by humans. But how do you get data? How does a researcher get data? They experiment on real things in the real world.

Speaker 1:

Well, I mean, Stephen Hawking, didn't he come up with theories just by thinking about them? For one, I'm not an expert on Stephen Hawking, but didn't he synthesize them? But an AGI, if experiments need to be done, can get humans to do those experiments.

Speaker 2:

Okay, so then the humans aren't yeah.

Speaker 1:

Sure, until we have robots, and then it'll get robots to do the experiments. Get the robots to do it, yes.

Speaker 2:

So, you know... Well, to me, robot AGI makes more sense than AGI in a computer, because the robot is interacting with the world, right? Like Boston Dynamics, and what Tesla's creating.

Speaker 1:

If that's easier for you to wrap your head around, we can say that once we have AGI, we can put that brain into a robot. It's just that robotics itself requires...

Speaker 2:

And then they can dissect things. They can like look into things.

Speaker 1:

They can use microscopes to see. But even a brain in a box, in a computer: you have remote surgery tools, you know, you can control dumb robots, basically, so you can interact with the world through external tools.

Speaker 2:

So you are acknowledging the importance of the senses and the interaction with the real world for experiments?

Speaker 1:

Yes. Well, for experiments, for knowledge, yes, that's the whole point of experiments. But there's such a vast amount of latent knowledge out there. When you think of all of the research papers that nobody has time to read or really digest and analyze, there's so much knowledge out there that isn't being utilized, because we don't have enough people with the time or patience or motivation to actually harvest this information.

Speaker 2:

Yeah, and just to repeat what I said earlier, so we're on the same page here: I am very excited about, I'm going to just call it AI broadly as a tent, including AGI, the tool, for me, of being able to integrate mass amounts of data that a human is unable to on our own. That will allow us as humans, cancer researchers and biomedical researchers and so on, to take that data and come up with new things, new integrations, new tools, and even use the AI and AGI to help, and maybe even come up with some of their own. But it's still, to me, always directed by the human. And one of the problems you put in here is the constant need for a human in the loop. Because you phrased it "constant need for a human in the loop": are you trying to reduce the human in the loop, or eliminate the human-in-the-loop part?

Speaker 1:

So the ultimate goals, like cancer research or whatever, are what we want, and the personal assistant will be guided by what we want it to do. But essentially, after that, it has to operate autonomously to a large extent.

Speaker 2:

It has to kind of find its own pathways. Yeah, right, so it can operate autonomously. Now, the implication of AGI, you can't beat around the bush, is massive, massive unemployment, because there'll be very few jobs where humans will provide better value.

Speaker 1:

Basically, yeah. Well, it follows from AGI. If you agree with the description of AGI, it follows from that.

Speaker 2:

I mean, first of all, any desk-bound job: any programmer, any accountant, any... Yeah, no, I mean, AI is already replacing jobs as it is right now.

Speaker 1:

Actually, it's generating more jobs than it's replacing. Well, but I think AGI would do the same thing. No.

Speaker 2:

Yeah, I know, because this is the crux of our disagreement, I think. Right, I don't believe it is. There's something special about the human consciousness and mind, and I don't believe that AGI is going to do that. It'll just be an amazing extra tool that's advanced. And, you know, I really like your three problems: the propensity to hallucinate, the inability to update the core model incrementally in real time, and the constant need for a human in the loop.

Speaker 1:

Actually, the fourth one is the absurd cost of it.

Speaker 2:

Yes, and you did mention that in other places: the energy consumption of what it takes to run ChatGPT, for instance, and the 10 trillion tokens to train it, is just enormous compared to the efficiency of the human mind. So to me, you and other people in the field are identifying core problems with AI as it stands.
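For a rough sense of scale on that efficiency gap, here is a back-of-the-envelope comparison. The roughly 20-watt figure for the human brain is well established; the training-energy number is an assumed, order-of-magnitude placeholder, since published estimates for frontier models vary widely.

```python
# Back-of-the-envelope energy comparison. The brain figure is standard;
# the training figure is an assumption for illustration only.
BRAIN_POWER_W = 20                # human brain: roughly 20 watts, continuously
TRAINING_ENERGY_KWH = 50_000_000  # assumed ~50 GWh for one large training run

brain_kwh_per_year = BRAIN_POWER_W * 24 * 365 / 1000  # about 175 kWh/year
brain_years = TRAINING_ENERGY_KWH / brain_kwh_per_year

print(f"A human brain uses about {brain_kwh_per_year:.0f} kWh per year.")
print(f"The assumed training run equals roughly {brain_years:,.0f} brain-years of energy.")
```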

Speaker 2:

So, yes, the elimination or reduction of those problems is of immense value. To any degree you can improve that, incrementally or by leaps and bounds, it's a fantastic leap. But because of the things that I've said, the sensory perception, the physical, the emotional, I do think those are integral to learning and experience, knowledge and wisdom, not only about the world around us, which is the easiest thing, I think, for AI to replace, our understanding of the physical world, but about the human world, which to me is infinitely more complex, and which I definitely don't see it being able to understand. But again, this is someone speaking who is very excited, an outsider who's excited by your ability to eliminate these problems. It's just going to be hard to convince me of the rest.

Speaker 1:

Well then, you know, you really have to drill down and say: what specifically, what specific tasks, can an AGI not replace, in principle?

Speaker 2:

Well, I think I'm a little suspicious of the governance part, because I know there are a lot of philosophies out there that claim to be very logical, and I think they are pretty logical.

Speaker 2:

And if you don't have enough of, you know, what it's like to feel pain, for instance, then it may be logical that America, because of the things you said, needs a revolution right now. But if you don't take into consideration that there's no afterlife, that we only have one life to live, then: when do we decide it's do or die, that we're going to die for this cause? When do we decide to kill for this cause? When do we decide to hurt people, to torture them for information? Those are the kinds of things you need to integrate and synthesize in your mind if you're going to have a revolution on some level. You have to know that death and pain and misery and possible failure are part of that process. And if you're saying you're taking all of that away from AI, its logic will not...

Speaker 1:

No, we're not taking it away. Remember, I said that it will be able to understand these human emotions and drives.

Speaker 2:

It'll be able to know about them; it can't understand them. I cannot understand what a pregnancy feels like. I can study it, I can analyze it, I can understand it intellectually, but the emotional psychology and all that goes into it I can never understand as a man. I can read poetry about it and get a little bit as close as possible, but I'll never really understand it.

Speaker 1:

So your argument is that a computer can never... Yeah. But whether it should be allowed or not, you take a logical approach to that, taking into account how a woman feels about it, the psychology and all the other factors, and an AGI will be able to do the same. Just because it can't feel those emotions itself doesn't mean it can't take into account their implications and consequences.

Speaker 2:

Yeah, again, then it would have to be different people's opinions that are leading to its conclusions, rather than the real intuitive understanding of it.

Speaker 1:

Well, no more than in the example you used, pregnancy. I mean, you have to survey people, women who have experienced pregnancies, women who have experienced abortions, and draw conclusions from that.

Speaker 2:

Those are two different things. So, coming up with the political understanding of the right to abortion: I don't think you need to experience pregnancy to say that every individual has the right to autonomy over their own body and decisions over their own body, and that includes a man and that includes a woman. You can come to your own understanding of this. I don't need to know what it feels like to have an abortion in order to know that. But what I'm saying is, there's something about the experience as such. Without the experience, there's a whole realm of decision-making an individual will make that will lead them to the kinds of decisions they make in the world.

Speaker 2:

So I agree with you in the sense that I don't need to experience pregnancy in order to understand the right to abortion, which I agree with. But I do think you need to have some kind of experience to understand when it's, you know, good and bad, for you to do that on your own. So the political right to do it is one thing, but for a woman to make that decision for herself, that's another. Of course, there are jobs where we may prefer to have humans do it, like teachers.

Speaker 1:

We may prefer to have our kids taught by humans, at least to a certain degree, even though the robots could do a much better job of it; we may still want to have humans as part of the curriculum.

Speaker 2:

I just don't know how they could do a better job of it, because, like I said, they don't understand what it's like to be a child.

Speaker 1:

Oh, I mean currently teachers do a terrible job. I didn't say they do a good job.

Speaker 2:

I agree that teachers and parents tend to mess up more than anything, but that's part of the human experience, by the way: the improper education and the emotional problems that can pass from parents and teachers to children.

Speaker 1:

I mean. In fact, teaching is already one of the areas where even systems like ChatGPT are starting to make inroads.

Speaker 2:

Well, so again, this goes back to my original premise. I agree, and I think it's exciting, what ChatGPT is able to do, and what AGI, to the degree that it solves those four core problems you identified, will be able to do in education, for instance, is personalize things. So you could have a subject-expert teacher who maybe gets to know fewer students, or some of the students, and then ChatGPT or something could help personalize. And I think there's a lot of value in that. I don't see the jump that you make in terms of it replacing humans in the core function of the human imaginative, creative-thinking, unique mind that is a human. That I just cannot get my head around yet. But again, a lot of this stuff I'm super bullish about.

Speaker 1:

Well, you should think about which professions there are. If you have an AGI that can think and learn and reason better than a human, what jobs are there that you would prefer to have humans do?

Speaker 2:

Well, the direction of them, for one.

Speaker 1:

Sure, but how much of our time does that take?

Speaker 2:

Oh, I don't know. I think this is something that is unfathomable to us. So I believe there will probably be whole new realms of things opened up, just like AI has already opened up new realms for us to take into consideration. So, like, one person can, you know, control SpaceX. So then you can have something bigger and better than SpaceX. You can have new forms of energy. We could be in different realms. Maybe we're going to be explorers exploring the whole universe. I don't know. It'll be like Star Trek, where a big part of our job is going out and exploring the universe. That could be a big part of it. Yeah, there are unimagined things we don't know about yet. And again, I still think what we'll get to is more of an understanding of that unique human thing.

Speaker 2:

I think there's something special about the human mind that AI can replicate on certain levels, that it can, you know, be like dentures to teeth. It's good, but not quite as good as the original. It's got some great things to it. That's just my view at the moment, and I'm hoping to be wowed. I'm excited about robots. I'm not fearful of any of these things. I'm excited by all of them, and I think they just open up opportunities for us to, you know, individuate, to focus on the things that we want and the life we want.

Speaker 1:

Yeah, I mean, one of the things is certainly that reducing the cost of goods and services will be a benefit to humanity. Having improved research, having research accelerated, that'll be a benefit to humanity. And having our own personal assistant: almost everybody I talk to says, hey, when can I have that? I would love that. So, you know, those are things we can look forward to with AGI.

Speaker 2:

So why don't we end with: what are the most significant challenges we face in achieving true AGI, and how is Aigo, your company, addressing some of those challenges?

Speaker 1:

Yeah, so really, the biggest challenge in my mind in achieving AGI is that right now we have a monoculture of AI research and funding. It's basically all about big data right now. The big companies have a lot of data, they have a lot of computing power, they have a lot of money. So that's the hammer they've got, and they try to make everything look like a nail. But it's fundamentally on the wrong track. I mean, the chief scientist at Meta, Yann LeCun, recently said large language models are an off-ramp to AGI, a distraction, a dead end. That's how strongly he puts it, and I totally agree with him in that regard. The current effort toward AGI is completely misdirected. You know, if you're trying to go north to get to human-level intelligence but you're facing the other way, it doesn't matter how much money you throw at it, you're not going to get there.

Speaker 2:

You're not going to go north. But the money is going to go that way because it's making money, right? Right.

Speaker 1:

But that's starting to change, where people like Yann LeCun say, look, we're on the wrong track. He's actually giving advice to students who study AI. He says, don't study large language models. That's not where the future is. So we are seeing a shift in that. There was a recent report as well, with people saying this is a bubble that's going to burst: the amount of money being invested in generative AI, we are not seeing the returns on.

Speaker 2:

The kind of return, yeah, the kind of return.

Speaker 1:

There's a limitation to what LLMs can do. So as soon as we start seeing a shift of people looking at another way of doing it, one that's closer to the way the human mind works, the 20 watts, and the 5 or 10 million words before we can acquire language, I believe we're going to get AGI soon. It could be less than five years. Now, our own company, just a few months ago, decided to put our commercial business on hold so that we could focus 100% on getting to AGI. So currently the only real constraint, as we see it, is money. I mean, we have some problems to work through, some development things to work through, but really the constraint, in my mind, is not theoretical, it's not computing power, it's just people. You know, we only have 10 or 12 people in the company now. It'll take forever to do it with 12 people.

Speaker 1:

So we're looking to hire another 50 people to accelerate this development of AGI. So I believe we can have AGI soon, whether through our efforts or through other companies that move away from the focus on large language models and look at something more like what DARPA calls the third wave of AI, which is cognitive AI. Your starting point is understanding intelligence. If you want to build an intelligent system, you really have to first understand intelligence, and that's sort of a cognitive psychology and philosophy endeavor. Yes, and then you have to have… that's one of those things, I think, that is another long conversation.

Speaker 1:

Yeah, and then you also have to have the computer science knowledge. Twenty years ago, cognitive science was a big topic. You don't really hear about cognitive science anymore. Cognitive science is basically the combination of philosophy, cognitive psychology and computer science, and we need more people bringing that perspective, the cognitive science perspective, to AI. What is intelligence, or what is important in human intelligence? And then, how do we build it? Not, how much data do we have and how much compute do we have?

Speaker 2:

Okay, do you mind if I ask one follow-up? Sure. Okay, because I know we're running out of time. I've talked to Chad Mills, who's a data scientist, about AI and some of the stuff he does with it. To me, this is all very fascinating. I've never heard of applying the insights from, and I already hear people making fun of me for this, the poets and the artists, those appliers of ideas and knowledge who think about humans in particular. And I'm just wondering, you know, so you mentioned philosophy, you mentioned psychology, and what's the other one? Computer science. Those three.

Speaker 2:

You know, those fields. But if you're trying to replicate humans, and if you look back at human existence, I think all the great improvements happened in the arts first. The artists are the ones who see things first. They don't know how to do it, but they show the way in many ways, even when it comes to things like perspective and understanding. And so it seems like maybe you need to add the arts into that, because if you're going to train a human, you want to train them in the arts as well, at least on some level. You want to include the arts, the sciences, math. And so, if you're going to train a human-like model, why wouldn't you include poetry? That's what I'm here to advocate for: the poetry.

Speaker 1:

Well, it's one of the first things they did with ChatGPT, you know. What they did is basically: can you write me a poem in the style of, you know, Bob Marley?

Speaker 2:

So this is my problem. Okay, I'm sorry, this is my problem. I have nothing against Bob Marley, but Bob Marley is no Coleridge, right? He's no Keats.

Speaker 2:

Okay, I shouldn't say that. But what I'm saying is, when people use the example of the arts, they're using the history of art, and in my view, the quality of the last hundred years is down here, versus back then it was up here, just the overall quality of it. Especially what they created: you know, Bach creating something new that was just profound and unique, new integrations. What we're doing now is like, you know, a thimble, or a little bell. It's the most primitive quality of stuff. So what I'm saying is, why not teach them? If you're going to test the quality of ChatGPT or AI or AGI, don't ask it to do a Bob Marley. Tell it to create a new form of literature, and do it at the quality of…

Speaker 2:

You know, a Melville, a Rand, a Dostoevsky.

Speaker 1:

You can be our advisor on that. I would love that, if we get there, something to do with the arts.

Speaker 2:

That's what I'm interested in. But either way, I mean, there's lots of poetry, and the arts is one thing that is just as codified, or more codified, than science is, and it seems like that's a good thing to train AI on. That's all I'm advocating for. I want to train kids and adults on the great canonical works of the Western world. I think AI should be just as trained. That's my only answer.

Speaker 1:

Interesting thought. I mean, to me, art has really been sort of the science fiction-inspired stuff around AGI, which is important, yeah, for sure. But it's an interesting thought. I hadn't really considered that.

Speaker 2:

Okay, well, there you go. Boom, it's done, it's going to happen: the poetry. We're going to teach the AGI to write a Wordsworth poem. All right, okay. So, any last thoughts, Peter, before we leave?

Speaker 1:

No, I think, hopefully, I managed to get a few more people excited about how beneficial AGI can be. And anybody who wants to help us make that happen, in whatever way, spreading the word or writing us a check or whatever, you know where to find us.

Speaker 2:

All right, Aigo.ai. Yep. Great, okay. Thank you, Peter. Thank you.