Unveiling The Future Of AI


Unveiling The Future Of AI, written by John Jantsch. Read more at Duct Tape Marketing.

Marketing Podcast with Kenneth Wenger

In this episode of the Duct Tape Marketing Podcast, I interview Kenneth Wenger. He is an author, a research scholar at Toronto Metropolitan University, and CTO of Squint AI Inc. His research interests lie at the intersection of humans and machines, ensuring that we build a future based on the responsible use of technology. His newest book is Is the Algorithm Plotting Against Us?: A Layperson’s Guide to the Concepts, Math, and Pitfalls of AI. Kenneth explains the complexity of AI, demonstrating its potential and exposing its shortfalls, and empowers readers to answer the question: What exactly is AI?

Key Takeaway: While significant progress has been made in AI, we are still in the early stages of its development. Current AI models are primarily performing simple statistical tasks rather than exhibiting deep intelligence. The future of AI lies in developing models that can understand context and differentiate between right and wrong answers. Kenneth also emphasizes the pitfalls of relying on AI, particularly the lack of understanding behind a model’s decision-making process and the potential for biased outcomes. Making these machines trustworthy and accountable is crucial, especially in safety-critical domains where human lives could be at stake, such as medicine or law.
Overall, while AI has made substantial strides, there is still a long way to go in unlocking its true potential and addressing the associated challenges.

Questions I ask Kenneth Wenger:

[02:32] The title of your book, Is the Algorithm Plotting Against Us?, poses a provocative question. So why ask it?
[03:45] Where do you think we really are in the continuum of the evolution of AI?
[07:58] Do you see a day when AI machines will start asking questions back to people?
[07:20] Can you name a particular instance in your career where you felt, “This is going to work; this is what I should be doing”?
[09:25] You have both “layperson” and “math” in the title of the book. Could you give us the layperson’s version of how it works?
[15:30] What are the real and obvious pitfalls of relying on AI?
[19:49] As people start relying on these machines to make decisions that are supposed to be informed, the predictions could often be wrong, right?

More About Kenneth Wenger:

Get your copy of Is the Algorithm Plotting Against Us?: A Layperson’s Guide to the Concepts, Math, and Pitfalls of AI. Connect with Kenneth.

More About The Agency Certification Intensive Training:

Learn more about the Agency Certification Intensive Training here.

Take The Marketing Assessment: Marketingassessment.co

Like this show? Click on over and give us a review on iTunes, please!

Duct Tape Transcript

John Jantsch (00:00): Hey, did you know that HubSpot’s annual Inbound conference is coming up? That’s right. It’ll be in Boston from September 5th through the 8th. Every year Inbound brings together leaders across business, sales, marketing, customer success, operations, and more. You’ll be able to discover all the latest must-know trends and tactics that you can actually put into place to scale your business in a sustainable way. You can learn from industry experts and be inspired by incredible spotlight talent.
This year, the likes of Reese Witherspoon, Derek Jeter, and Guy Raz are all going to make appearances. Visit inbound.com and get your ticket today. You won’t be sorry. This programming is guaranteed to inspire and recharge. That’s right. Go to inbound.com to get your ticket today.

(01:03): Hello, and welcome to another episode of the Duct Tape Marketing Podcast. This is John Jantsch. My guest today is Kenneth Wenger. He’s an author, a research scholar at Toronto Metropolitan University, and CTO of Squint AI Inc. His research interests lie at the intersection of humans and machines, ensuring that we build a future based on the responsible use of technology. We’re going to talk about his book today, Is the Algorithm Plotting Against Us?: A Layperson’s Guide to the Concepts, Math, and Pitfalls of AI. So, Ken, welcome to the show.

Kenneth Wenger (01:40): Hi, John. Thank you very much. Thank you for having me.

John Jantsch (01:42): So we are going to talk about the book, but I’m just curious: what does Squint AI do?

Kenneth Wenger (01:47): That’s a great question. Squint AI is a company that we created to do some research and develop a platform that enables us to do AI in a more responsible way. I’m sure we’re going to get into this, but I touch upon it in the book in many places as well, where we talk about the ethical use of AI and some of its downfalls. What we’re doing with Squint is trying to figure out how to create an environment that enables us to use AI in a way that lets us understand when these algorithms are not performing at their best, when they’re making mistakes, and so on.

John Jantsch (02:30): Yeah. So the title of your book, Is the Algorithm Plotting Against Us?, poses a bit of a provocative question. Obviously, I’m sure there are people out there saying no, and some saying, well, absolutely.
So why ask the question then?

Kenneth Wenger (02:49): Well, because I actually feel like that’s a question being asked by many different people, with different meanings. It’s almost the same as the question of whether AI poses an existential threat; it means different things to different people. So I wanted to get into that in the book and try to do two things: first, offer people the tools to understand that question for themselves and figure out where they stand in that debate, and second, provide my opinion along the way.

John Jantsch (03:21): Yeah, and I probably didn’t ask that question as elegantly as I’d like to. I actually think it’s great that you ask the question, because ultimately what we’re trying to do is let people come to their own decisions rather than saying, this is true of AI, or this is not true of AI.

Kenneth Wenger (03:36): That’s right. And again, especially because it’s a nuanced problem, it means different things to different people.

John Jantsch (03:44): So this is a really hard question, but I’m going to ask it: where are we really in the continuum of AI? People who have been on this topic for many years realize it’s been built into many things that we use every day and take for granted. Obviously, ChatGPT brought on a whole other spectrum of people who now at least have a talking vocabulary of what it is. But I’ve had my own business for 30 years. We didn’t have the web, we didn’t have websites, we didn’t have the mobile devices that certainly now play a part. And I remember, as each of those came along, people were like, oh, we’re doomed, it’s over. Right?
So currently there’s a lot of that type of language surrounding AI, but where do you think we really are in the continuum of the evolution?

Kenneth Wenger (04:32): You know, that’s a great question, because I think we are actually very early on. We’ve made remarkable progress in a very short period of time, but we’re still at the very early stages. If you think of where AI is right now versus where we were a decade ago, we’ve made some progress, but fundamentally, at a scientific level, we’ve only started to scratch the surface. I’ll give you some examples. The first models were great at giving us proof that this new way of posing questions, neural networks essentially, which are very complex equations, could work: if you use GPUs to run these complex equations, then we can actually solve pretty complex problems. That’s something we realized around 2012. Between 2012 and 2017, progress was very linear.

(05:28): New models were created, new ideas were proposed, but things scaled and progressed very linearly. After 2017, with the introduction of the model called the Transformer, which is the base architecture behind ChatGPT and all these large language models, we had another kind of realization. That’s when we realized that if you take those models and you scale them up, in terms of the size of the model and the size of the dataset used to train them, they get exponentially better. And that’s how we got to where we are today: we haven’t done anything fundamentally different since 2017. All we’ve done is increase the size of the model and the size of the dataset, and they’re getting exponentially better.
John Jantsch (06:14): So, multiplication rather than addition?

Kenneth Wenger (06:18): Yes, exactly. The progress has been exponential, not on a linear trajectory. But again, given that we haven’t changed much fundamentally in these models, my expectation is that that’s going to taper off very soon. And now, where are we on the timeline, which was your original question? If you think about what the models are doing today, they’re doing very simple statistics, essentially. The idea of these models being called artificial intelligence, I think, is a bit of a misnomer sometimes, and it leads to some of the questions that people have, because there isn’t much deep intelligence going on; it’s just statistical modeling, and very simple at that. As for where we go from here, and what I hope the future is: I think things are going to change dramatically when we start getting models that are able not just to do simple statistics, but to understand the context of what it is they’re trying to achieve, and to understand the right answer as well as the wrong answer. For example, models that are able to know when they’re talking about things they know and when they’re skirting around a gray area of things they don’t really know about. Does that make sense?

John Jantsch (07:39): Absolutely. I totally agree with you on artificial intelligence. I’ve actually been calling it IA; I think it’s more of informed automation, is kind of how I look at it, at least in my work. Prompts, asking questions, that’s kind of the street use, if you will, of AI for a lot of people. Do you see a day where it starts asking you questions back?
Like, why would you want to know that? Or, what are you trying to achieve by asking this question?

Kenneth Wenger (08:06): Yeah, the simple answer is yes, I definitely do. And I think that’s part of what achieving a higher-level intelligence would be like. It’s when they’re not just doing your bidding, not just a tool, but they have their own purpose that they’re trying to achieve. That’s when you would see things like questions arise from the system: when they have a goal they want to get to, and they figure out a plan to get to that goal, that’s when you can see the emergence of things like questions to you. I don’t think we’re there yet, but I think it’s certainly possible.

John Jantsch (08:40): But that’s the sci-fi version too, right? I mean, where people start saying, like in the movies: no, no, Ken, you don’t get to know that information yet. I’ll decide when you can know that.

Kenneth Wenger (08:52): Well, you’re right. The way you asked the question was more like, is it possible in principle? I think absolutely yes. Do we want that? I don’t know; I guess that depends on what use case we’re thinking about. But from a first-principles perspective, yes, it is certainly possible to get a model to do that.

John Jantsch (09:13): So I do think there are scores and scores of people whose only understanding of AI is: I go to this place where it has a box, and I type in a question, and it spits out an answer. Since you have both “layperson” and “math” in the title, could you give us sort of the layperson’s version of how it does that?

Kenneth Wenger (09:33): Yeah, absolutely.
Well, at least I’ll try; let me put it that way. A few moments ago I mentioned that these models are essentially very simple statistical models. That phrase itself is a little controversial, because at the end of the day, we don’t know what kind of intelligence we have, right? If you think about our intelligence, we don’t know whether at some level we are also a statistical model. However, what I mean by AI today, and large language models like ChatGPT, being simple statistical models is that they’re performing a very simple task. If you think of ChatGPT, what it is doing is essentially trying to predict the next best word in a sequence. That’s all it’s doing. And the way it does that is by calculating what is called a probability distribution.

(10:31): So basically, for any word in a prompt or in a corpus of text, it calculates the probability that the word belongs in that sequence, and then it chooses the next word with the highest probability of being correct there. Now, that is a very simple model in the following sense. Think about how we communicate: we’re having a conversation right now, and when you ask me a question, I pause and I think about what I’m about to say. I have a model of the world, and I have a purpose in this conversation. I come up with the idea of what I want to respond, and then I use my ability to produce words and to sound them out to communicate that to you. It might be possible that I have a system in my brain that works very similarly to a large language model, in the sense that as soon as I start saying words, the next word I’m about to say is the one most likely to be correct, given the words I just said.

(11:32): It’s very possible that’s true.
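Wenger’s description, predicting the next word by computing a probability distribution over candidates and picking the most likely one, can be sketched in a few lines of Python. The vocabulary and scores below are invented purely for illustration; a real language model produces such scores from billions of learned parameters, and this greedy-decoding sketch is only one of several decoding strategies:

```python
import math

# Hypothetical vocabulary and unnormalized scores ("logits") for the next
# word, given some context. These numbers are made up for illustration.
vocab = ["plotting", "running", "banana", "learning"]
logits = [2.1, 1.3, -3.0, 0.8]

def softmax(scores):
    """Turn raw scores into a probability distribution that sums to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)

# Greedy decoding: choose the word with the highest probability.
best = max(range(len(vocab)), key=lambda i: probs[i])
next_word = vocab[best]
print(next_word)  # "plotting" has the largest score, so it is chosen
```

Repeating this step, appending the chosen word to the context and scoring again, is what produces the “rambling” behavior Wenger describes: each word is selected only from what came before, with no pre-formed plan.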
However, what’s different is that I already have a plan of what I’m about to say; in some latent space, I have already encoded, in some form, what I want to get across. How I say it, the ability to produce those words, might be very similar to a language model. But the difference is that a large language model is trying to figure out what it’s going to say while it’s coming up with those words, at the same time. Does that make sense? It’s a bit like they’re rambling, and sometimes, if they talk for too long, they ramble into nonsense territory, because they don’t know what they’re going to say until they say it. So that’s a very fundamental difference.

John Jantsch (12:20): I have certainly seen some output that is pretty interesting along those lines. But as I heard you talk about that, in a lot of ways that’s what we’re doing: we’re querying a database of what we’ve been taught, the words that we know, in addition to the concepts that we’ve studied and are able to articulate. In some ways, me prompting, or me asking you a question, works similarly. Wouldn’t you say?

Kenneth Wenger (12:47): The aspect of prompting a question and then answering it is similar, but what is different is the concept that you’re trying to describe. So, again, when you ask me a question, I

