The Five Key Takeaways from This Blog
- Despite claims from A.I. companies that chatbots are headed toward Ph.D.-level knowledge in the near future, many widely available chatbots, such as ChatGPT, still struggle to get the basics right.
- An example of this is detailed in a Wall Street Journal article, where a chatbot could not give an accurate answer to the question of how many PGA Tour wins Tiger Woods has.
- That same article points to the problem of deep knowledge in A.I.: despite being trained on nearly all of the Internet’s publicly available written material, chatbots still do not seem to have deep or comprehensive knowledge of topics. (There is perhaps an uncomfortable analogy to be made here between the chatbot’s huge consumption of information and humans’ not-quite-as-huge-but-still-quite-huge consumption of information, and the implications of the chatbot still failing at the basics.)
- The problem of deep knowledge can be mitigated, though not outright solved, in several ways, each detailed in that WSJ article. Fine-tuning A.I. models with businesses’ own data is one such avenue. So is custom-building A.I. models from the ground up. A third is retrieval-augmented generation, where you essentially give the A.I. a reference library to prioritize when looking for information to form an answer.
- For business owners, the significance of the problem of deep knowledge is that it points to the reality that chatbots do not have the capacities for understanding and grasping concepts that humans do. At the end of the day, the reason that A.I. prioritizes one bit of information over another is simply that it has been programmed to do so. (E.g., programming a chatbot to select definitions from the Oxford English Dictionary rather than from Urban Dictionary, an online database of slang.)
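The retrieval-augmented generation idea from the takeaways above can be sketched in a few lines. This is a minimal illustration, not a production system: the "reference library" contents and the word-overlap scoring are simplifying assumptions, standing in for the vector-search databases real RAG pipelines use.

```python
# Sketch of retrieval-augmented generation (RAG): before the model answers,
# search a small trusted "reference library" and attach the best-matching
# passage to the prompt so the model draws on vetted data, not random tweets.

def tokenize(text):
    """Lowercase a string and split it into a set of words."""
    return set(text.lower().split())

def retrieve(question, library):
    """Return the library passage sharing the most words with the question."""
    q_words = tokenize(question)
    return max(library, key=lambda passage: len(q_words & tokenize(passage)))

def build_prompt(question, library):
    """Prepend the retrieved passage as context for the model."""
    context = retrieve(question, library)
    return f"Context: {context}\nQuestion: {question}"

# A toy reference library a business might supply (illustrative entries).
LIBRARY = [
    "Tiger Woods has 82 PGA Tour wins, tied for the all-time record.",
    "The company returns policy allows refunds within 30 days.",
]

prompt = build_prompt("How many PGA Tour wins does Tiger Woods have?", LIBRARY)
print(prompt)
```

Because the retrieved context is prioritized over whatever the model absorbed in training, the bogus "1,000,000 wins" tweet discussed later in this post would never reach the prompt.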
Going Deep
Worth mentioning here is that the term “deep knowledge” should not be confused with “deep learning,” a term that you may be familiar with.
The former more or less refers to the likelihood that A.I. will give accurate answers and information about any particular topic, such as deep-sea exploration.
The latter refers to a type of machine learning that loosely models the information-processing center that is the human brain, constructing an algorithmic “neural network” made up of neuron-like “nodes” that analyze information to produce an A.I.’s output, typically a prediction.
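The neuron-like "node" just described can be sketched concretely. This is a bare illustration with arbitrary, hand-picked weights, not a trained model: each node weights its inputs, sums them, and squashes the total through an activation function, and stacking layers of such nodes is what makes the learning "deep."

```python
# One neuron-like "node": a weighted sum of inputs plus a bias, passed
# through a sigmoid activation. Weights here are arbitrary illustrative
# values; in real deep learning they are adjusted during training.
import math

def node(inputs, weights, bias):
    """Compute one node's output, squashed into the range (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# A tiny two-layer "network": two hidden nodes feed one output node.
hidden = [
    node([0.5, 0.2], [0.9, -0.4], 0.1),
    node([0.5, 0.2], [0.3, 0.8], -0.2),
]
output = node(hidden, [1.2, -0.7], 0.0)
print(output)  # a prediction-like value between 0 and 1
```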
Why Does A.I. Get It Wrong?
A Silicon Valley type could likely give business owners a list of technical reasons for why A.I. struggles with the problem of deep knowledge.
Some of these reasons may be quite valid, and addressing the technical issue at hand may indeed raise the probability of getting an accurate answer from A.I.
However, business owners should temper the promises of Ph.D.-level chatbot assistants with a cold splash of common sense about the differences between humans and machines.
If a human were somehow able to consume the entirety of publicly available writing on the Internet, that human would likely have feelings about the correctness or incorrectness of every piece of writing.
The average human has also lived in the world, taking in experiences and forming memories that undergird a sense of what likely could happen and what likely could not happen in the world. (Of course, not everyone’s world-testing sense is of the same caliber).
Training an A.I. system is meant to create that predictive sense of what is and is not true. But an A.I. has not organically lived in the world, taking in experiences and forming memories that, importantly, are attended by emotions that undergird interpretations and beliefs, and that absence limits its odds of getting even basic common-sense answers correct.
For instance, a post on Twitter that claims that Tiger Woods has 1,000,000 PGA Tour wins would likely elicit a skeptical response from a human reader.
Meanwhile, an A.I. chatbot has no emotional feelings and does not possess a lifetime of experience and memories in the world, the kind that grants something like a feel for what would and would not happen in our world, including how many PGA Tour wins someone could plausibly earn.
Instead, that bogus tweet would be just another piece of data to the chatbot, which, as stated, does not have feelings at all.
Other Great GO AI Blog Posts
GO AI, the blog, offers a combination of information about, analysis of, and editorializing on A.I. technologies of interest to business owners, with a special focus on the impact this tech will have on commerce as a whole.
In a typical week, multiple GO AI blog posts go out. Here are some notable recent articles:
- For Businesses and Other Organizations, What Makes a Successful Chatbot?
- IBM Watson vs. ChatGPT vs. Gemini: How Will Each Affect Search Engines?
- Using A.I. to Find Resources for Business Owners
- How Would Restricting Open-Source A.I. Affect Business Owners?
- The EU’s A.I. Act Has Become Law: The Implications for Business Owners (Especially American)
In addition to our GO AI blog, we also have a blog that offers important updates in the world of search engine optimization (SEO), with blog posts like “Google Ends Its Plan to End Third-Party Cookies”.