
    What Exactly Is AI? Here’s What You Need to Know


    You don’t have to work in technology to know that the industry is undergoing a massive shift thanks to recent advances in Artificial Intelligence (AI). Almost every major platform, from Google to Microsoft to Facebook, has launched multiple new AI-powered tools, and many smaller technology companies have followed suit.

    Everyone is talking about it, trying it out, publishing think pieces about it, and anxiously or eagerly waiting to see how it will change the world. But do we actually know what AI is, exactly? Many don’t.

    Why It’s Hard to Define “AI”

    If you ask a hundred people to define AI, you’ll get a hundred different definitions. In fact, if you ask AI to define AI, you’ll get a different answer depending on which data set the particular AI engine is using.
    The reason for this is that there is no single, settled definition. It’s not a legal term or a scientific term; it’s simply a label we collectively choose to apply to certain types of technology, and we’ve applied it inconsistently. Even computer scientists don’t all agree on a definition.

    Furthermore, there have been many versions of “AI” before the ones generating so much attention right now. Computers, when they first came out, were called “artificial intelligence.” The algorithms used to manage social media feeds are a form of artificial intelligence. Auto-complete on your phone is a form of artificial intelligence. Transcription software uses forms of AI to understand voice and convert it to text. In short, “AI” is hardly new and hardly uniform in its appearance and usage.

    One definition of AI is that it’s the science of designing computer systems that can complete tasks that historically required humans to do them. But we’ve been designing technologies to do things for us since the first time someone picked up a stick and poked at a termite mound with it instead of with a finger. So this definition is not very helpful.
    A better definition (for now), in my opinion, avoids comparison with human intelligence and focuses on the features of AI: machine-based systems that create content or other outputs that mimic human outputs, using algorithms with access to extremely large datasets.

    Some types of AI incorporate “machine learning,” which enables them to learn from each set of inferences they make in order to make better inferences the next time, much the way a social media algorithm “learns” what you like based on which ads you watch and which videos you click on. Other forms of AI rely on neural networks that “learn” through systems of rewards and consequences, somewhat like the human brain. Currently, however, the type of AI getting the most attention is a specific type called a Large Language Model (LLM).
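
    To make the idea of “learning from feedback” a little more concrete, here is a deliberately tiny, hypothetical Python sketch of a system that nudges its guesses up or down based on what a user clicks. The topics, scores, and learning rate are invented for illustration; real recommendation engines and neural networks are vastly more sophisticated, but the principle of adjusting future guesses based on past feedback is the same.

```python
# Minimal, hypothetical sketch of "learning from feedback":
# nudge a score for each topic up or down depending on whether
# the user clicked, so later guesses get slightly better.

from collections import defaultdict

preferences = defaultdict(float)  # topic -> learned preference score
LEARNING_RATE = 0.1

def record_feedback(topic: str, clicked: bool) -> None:
    """Update the learned score for a topic based on one observation."""
    reward = 1.0 if clicked else -1.0
    preferences[topic] += LEARNING_RATE * reward

def recommend(candidates: list[str]) -> str:
    """Pick the candidate topic with the highest learned score."""
    return max(candidates, key=lambda t: preferences[t])

# Simulated browsing history: the user keeps clicking on sales content.
for topic, clicked in [("sales", True), ("cooking", False), ("sales", True)]:
    record_feedback(topic, clicked)

print(recommend(["sales", "cooking"]))  # -> "sales"
```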

    What is a Large Language Model?

    Large language models are a specific type of generative AI that produces human-like, language-based content in response to queries and prompts. LLMs include technologies like OpenAI's ChatGPT. They are impressive in their ability to generate seemingly intelligent answers to human questions in natural-sounding language and to carry on apparently human conversations. They can produce academic papers, essays, poems, plays, and even novels. Many of them have even passed the classic Turing Test.

    But what, exactly, are they? At their core, they’re simply sophisticated algorithms with access to extremely large databases. To generate human-like language-based content, they use “learning” from massive language data sets to “guess” the most likely arrangement of words that will provide a suitable response to a query or prompt. They are similar to a very sophisticated auto-complete program, guessing the next word you want them to say and then saying it. They do not understand the meaning of the content they’re producing or the conversation they’re participating in; they are simply “guessing” at the right combination of words to sound intelligible.
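
    To ground the auto-complete analogy, here is a toy Python sketch that “learns” which word tends to follow which by counting word pairs in a bit of training text, then always guesses the most frequent continuation. It is a hypothetical illustration of the statistical principle, not how any actual LLM is built; real models use billions of learned parameters rather than a simple count table.

```python
# A toy "auto-complete": count which word tends to follow which,
# then always guess the most frequent continuation. LLMs rest on the
# same statistical idea, at enormously larger scale.

from collections import Counter, defaultdict

training_text = (
    "the customer wants a demo . the customer wants a discount . "
    "the seller books a demo ."
).split()

# Count how often each word follows each other word.
next_word_counts: dict[str, Counter] = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def guess_next(word: str) -> str:
    """Return the most common word observed after `word`."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(guess_next("customer"))  # -> "wants"
print(guess_next("a"))         # -> "demo" (seen twice vs. "discount" once)
```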

    When LLMs Are Useful (and when they are not)

    The output of LLMs can be very useful, especially for summarizing large amounts of data or iterating on multiple ideas and concepts. At Membrain, we frequently use it to summarize podcasts and other content so that our teams can focus on bigger, more strategic tasks. Similarly, a study by BCG found that in some contexts, using AI to ideate product innovation increased the productivity of individual consultants by roughly 40%.
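
    As an illustration of that kind of summarization workflow, here is a minimal sketch using the OpenAI Python client. The model name, prompt, and file name are assumptions made for the example; this is not a description of Membrain's actual tooling.

```python
# Hedged sketch: asking an LLM to summarize a podcast transcript.
# Model name, prompt, and file name are assumptions for illustration.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(transcript: str) -> str:
    """Ask the model for a short bullet-point summary of a transcript."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; use whatever your account offers
        messages=[
            {"role": "system", "content": "Summarize the transcript in five bullet points."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# Usage:
# print(summarize(open("podcast_transcript.txt").read()))
```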

    However, this capability does not come without caveats. The outputs of an LLM can be misleading because the algorithms will always be confident, but they will not always be right. Because it has no actual self-awareness, an LLM not only doesn’t know when it’s wrong, it doesn’t care. It simply produces the output it predicts the user wants. As a result, it is just as confident in its outputs when it is dead wrong as when it is right.

    Computer scientists refer to this phenomenon of AI being confidently wrong as “hallucinations.” Google has admitted that its AI search tools sometimes give dead-wrong information and that it has no solution. For instance, as of this writing, one user asked Google’s AI chatbot how to identify edible mushrooms and was told to taste them: if they taste good, they’re “probably” edible. This is dangerously wrong information, delivered with a great deal of confidence, at the top of a search results page produced by one of the world’s largest companies. It’s a demonstration of how badly things can go wrong if you don’t have an actual human intelligence guiding matters.

    [Screenshot: Google AI search result suggesting that tasting mushrooms is a way to identify edible ones]

    Additionally, the same BCG study cited above showed that while LLMs are very good at generating new ideas, they are bad at diversity of thought. An “AI” like ChatGPT-4 will return roughly the same answer to the same query every time, reducing the potential for multiple points of view or creative diversity. The study also found that they are currently very bad at analyzing business problems and developing solutions.

    This is because, although they are hyped as artificial “intelligence,” these algorithms are not actually intelligent (yet). They don’t understand the problems they’re analyzing or the impact of the information they confidently share. Though they are very, very good at guessing, they are still just guessing.

    Finally, a large language model’s performance will always depend on the quality of the data it has been trained on and the intent of its creator. For this reason, although they have many potential applications within B2B sales, it is unlikely that AI tools will transform sales performance except when integrated into a larger sales strategy.

    In the next couple of blog articles on this topic, I’ll explore some outstanding potential uses for AI in complex sales, some downright bad uses that are proliferating, and how we are (and are not) embracing AI at Membrain. 

    By George Brontén
    Published May 29, 2024

    George is the founder & CEO of Membrain, the Sales Enablement CRM that makes it easy to execute your sales strategy. He is a life-long entrepreneur with 20 years of experience in the software space and a passion for sales and marketing. With the life motto "Don't settle for mainstream," he is always looking for new ways to achieve improved business results using innovative software, skills, and processes. George is also the author of the book Stop Killing Deals and the host of the Stop Killing Deals webinar and podcast series.

    Find out more about George Brontén on LinkedIn