I think critical thinking is a little bit at risk with AI being able to create research reports and provide answers very confidently. And there's nothing wrong with that. I mean, I use all kinds of AI models to help me in my thinking, so I'm not saying that using them is bad. The point I'm making, though, is that although everything sounds good, it becomes really important to know when to take action, and which pieces of the AI-generated strategy, or recommendations or whatever, you should actually act on. Because if ChatGPT, for example, or Perplexity or whatever, uses the wrong source but is able to sound convincing, you're putting yourself in a position to make a very big business mistake if you base your strategy and decision-making on the output of AI without vetting it and thinking about it from a critical perspective.
Another interesting thing: I remember, for the first year after ChatGPT came out, you had all these people coming out with prompts, long, long-ass prompts to help you get to the best output. And I would save a bunch of them. To be honest, it's hard to know if they worked, because going from my regular prompt to a long prompt, I didn't compare. Or maybe once in a while I would compare one line, one answer, or two back-and-forths, but I never did a really in-depth comparison. So it's hard to say if they were effective. Let's assume they were, or let's assume it doesn't even matter. You had all these people, and there are still people, posting about the best prompts and all that stuff. In my opinion, when you're using very elaborate prompts, you're asking the AI model to produce the best-case scenario in the fewest prompts, with the least amount of back and forth. And I'm not saying just go in there with "give me a marketing plan." No. I do say, "act like a marketing director with 100 years of experience." I craft it, because I want the AI platform to know where it should be pulling its knowledge from. I'm giving it context. And it can sound very confident in its answer, and it makes sense. But when it comes time to apply it in the real world, I think we need to question it.
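Looking back at that, if I had actually wanted to check whether those long prompts beat my regular one, a minimal side-by-side comparison would have been enough. Here's a rough sketch of what that could look like, assuming the OpenAI Python client; the model name and both prompts are just placeholders, not the actual prompts people were sharing.

```python
# A minimal sketch of comparing a long, elaborate prompt against a plain one
# on the same question. Assumes the OpenAI Python client; the model name and
# both prompts are placeholders.
from openai import OpenAI

client = OpenAI()

QUESTION = "Give me a marketing plan for launching a new product."

short_prompt = QUESTION
long_prompt = (
    "Act like a marketing director with 100 years of experience. "
    "Think step by step, state your assumptions, and structure the output "
    "with clear sections.\n\n" + QUESTION
)

def run(prompt: str) -> str:
    """Send a single prompt and return the model's answer."""
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

# Same question, two prompt styles, so the outputs can be judged against the
# same criteria instead of going off gut feel.
for label, prompt in [("short", short_prompt), ("long", long_prompt)]:
    print(f"=== {label} prompt ===")
    print(run(prompt))
```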
And so I'm actually not putting in very elaborate prompts anymore. I'm creating the context: act like this, this is the problem I have, this is the desired end goal, here are a couple of things to keep in mind. And then I'm asking follow-ups to make sure it makes sense. Some of the follow-ups include: what if this changes, and how does it impact the answer? What are all the things that can go wrong, and what's your proposed mitigation strategy for each of them? What does failure look like in this scenario? I have a back and forth. I ask the questions so that I'm sure, or as sure as I can be, that I'm covering all the blind spots, all the potential vectors of failure, so that when I receive the final output at the end of my questioning period, I'm confident in the answer I'm getting. But I still have to do my own critical thinking to ask: okay, did I really cover everything I need to cover with AI before proposing this as my solution? And if you cram all of that into the first prompt, I feel like you're losing out.
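Just to make that back-and-forth concrete, here's a rough sketch of the flow in code. It assumes the OpenAI Python client; the model name, the persona, and the exact wording of the follow-up questions are placeholders, not the precise prompts I use.

```python
# A minimal sketch of the context-then-follow-ups flow described above.
# Assumes the OpenAI Python client; the model name and prompt wording are
# placeholders, not a recommendation.
from openai import OpenAI

client = OpenAI()

# Start from a persona so the model knows where to pull its knowledge from.
messages = [
    {"role": "system", "content": "Act like a marketing director with 100 years of experience."},
]

def ask(question: str) -> str:
    """Append a user turn, get the model's reply, and keep it in the conversation history."""
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer

# First turn: context, the problem, the desired end goal, a few constraints,
# and a request for a proposal -- not one giant prompt asking for everything.
context = (
    "This is the problem I have: ...\n"
    "This is the desired end goal: ...\n"
    "Here are a couple of things to keep in mind: ...\n"
    "What would you propose?"
)
print(ask(context))

# Follow-ups to probe blind spots before trusting the final output.
follow_ups = [
    "What if this changes? How does it impact the answer?",
    "What are all the things that can go wrong, and what's your proposed mitigation strategy for each?",
    "What does failure look like in this scenario?",
]
for question in follow_ups:
    print(ask(question))
```

The point of keeping it as separate turns rather than one mega-prompt is exactly the argument that follows: each question gets the model's full attention instead of competing with four others in the same breath.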
The AI is going to miss a couple of things. It's like asking someone five questions in one breath: they're going to forget the first three, might only remember the last one, and then be like, okay, what were the others again? It won't know how to divide its compute power and allocate it all properly. And even if it allocates all of it equally, maybe one section deserved more compute power and more thinking time, and that section won't get the extra time because you've loaded so many queries into one prompt, so you'll get a subpar answer on that section. And I feel like, as people, if we try to load up the LLM to give us the best possible answer in the least amount of back and forth, we're losing our ability to think critically.
Now, for those that might say, oh well, you know, people used to say we'd never have a calculator in our pocket so we needed to learn mental math, and now we all have cell phones with a calculator and beyond, right? Yes, that's fine. But think about the impact of the decision. If I don't know what 12 times 12 is and my phone is dead, I can go, okay, what's 12 times 10? That's 120, then add 12 twice and I get 144. I could figure it out. And worst case, if my brain is just not there and I can't figure it out, 99% of the time I'm not going to die because I can't figure out what 12 times 12 is. I might be very embarrassed if I'm in a public setting and can't figure it out. And for those of you wondering if I know what it is, it's 144. But I might just suffer some embarrassment; nothing else is going to happen. The economy won't collapse, my company won't collapse, my position in the company probably won't be at risk, right? 99% of the time.
But in the case of ChatGPT, Grok, or Perplexity helping you through a business case or a strategy for a business situation, if you lose the ability to think critically and rely more and more on the LLM to give you the answers, then the impact of that decision is a lot greater than quick mental math. So the argument and the nuance I'm bringing is that we're no longer talking about quick math in your head versus quick math on a calculator. We're using AI to help us make important decisions. And if we delegate the thinking, the critical-thinking part of decision-making, to AI, we're losing the ability to do it ourselves, and we expose ourselves to the risk of making the wrong decision and having bad or undesired effects happen.
Now you could say, well, a human could think of all the things, and before ChatGPT we had to cover all of our bases ourselves and we still made mistakes. Yes, but if we made a mistake in the strategy, if we went with a strategy that did not work, there was still some thinking involved. Unless the company didn't think at all, and then they're just playing with luck. If they didn't bother to think through their strategy before going with it, that's just gambling. But for those that made a decision on a strategy and it didn't work out, there was a rationale and a thinking exercise that was done. They just went in the wrong direction, so you can point it back to something. Whereas if you hand it to AI, you can't really point it back to anything if it goes wrong.
That's one. Two, when you're giving it to AI, you're feeding it specific inputs, but the world is dynamic, shifting, changing, and constantly moving. So the AI is giving you an answer based on that snapshot of inputs. I don't think we're going to get to a place where AI can process all the moving parts happening in the world and provide us with an answer until we get quantum computing in the hands of everyone, which is a pretty long time away. Until then, there are so many things happening in the real world that the AI just doesn't have visibility on, or the capacity to understand all those moving parts and inputs. Even if it can process more things at a quicker speed than humans, it cannot interpret them in the way that makes sense to humans.
I don't know if that part makes sense, but AI makes logical decisions in a logical manner. It can only use logic, because that is all it has. Humans make emotional decisions that they rationalize after. And that's the disconnect. AI won't solve all of our problems. AI can cure cancer, can find the right vaccines and all that, but that's a mechanical problem: there's a clear cause in the body that is creating a problem, and you need to address that cause. If you address the cause, the problem disappears. So AI can handle that, because there's a constraint. But when you're talking about the real world, the problem space is huge, meaning there are almost no constraints. There are so many moving parts. AI is going to help, a thousand percent, I believe AI can help. But the important piece is that the user, I feel, needs to retain their ability to think critically if they want to build interesting things. AI is already in the hands of everyone, or close to it. So the bar of what is good has been elevated. Not everyone is at that level right now, but everyone's going to get there.
What good looks like is going to be greater than what it is today. But when everyone's at that level of good, it'll just be good. And so if you want to be great, or if you want to build something great and stand out, the people who use AI while retaining and sharpening their critical thinking skills are the ones who are going to excel, in my opinion.