Free Your Brain from ChatGPT "Thinking"

About this content

If you're someone who values being able to think independently, then you should be troubled by the fact that your brain operates all too much like ChatGPT. I'm going to explain how that undermines your ability to think for yourself, but I'll also give you a key way to change it.

How ChatGPT "Thinks"

Let's first understand how ChatGPT "thinks." ChatGPT is one of several artificial intelligences called a Large Language Model, or LLM. All LLMs use bulk sources of language, like articles and blogs found on the internet, to find trends in which words are most likely to follow other words. To do so, they identify key words that stand out as most likely to lead to other words. Those key words are called "tokens." Tokens are the words that cue the LLM to look for other words.

So, as a simple example for the sake of argument, let's say we ask an LLM, "What do politicians care about most?" When the LLM receives that question, it creates two tokens: "politicians" and "care." The rest of the words are irrelevant. Then the LLM scours the internet for its two tokens. Though I did not run this through an LLM, it might find that the words most likely to follow the sequence [politicians] > [care] are "constituents," "money," and "good publicity."

But because LLMs only return what is probabilistically likely to follow what they identify as their tokens, an LLM probably would not come up with [politicians] > [care about] moon rocks, because the internet does not already have many sentences where the words "moon rocks" follow the token sequence "politicians" and "care."

Thus LLMs, though referred to as artificial intelligence, really are not intelligent at all, at least not in this particular respect. They just quickly scour the internet for words that are statistically likely to follow other "token" words, and they cannot determine the particular value, correctness, or importance of the words that follow those tokens. In other words, they cannot drum up smart, clever, unique, or original ideas. They can only lumber their way toward identifying statistically likely word patterns. If we were to write enough articles that said "politicians care about moon rocks," the LLMs would return "moon rocks" as the answer even though that is really nonsensical.

So, in a nutshell, LLMs just connect words that are statistically likely to follow one another. There is more to how LLMs work, of course, but this understanding is enough for our discussion today.

How Your Brain Operates Like ChatGPT

You're probably glad that your brain doesn't function like some LLM dullard that just fills in word gaps with ready-made phrases, but I have bad news: our brains actually function all too much like LLMs.

The good news about your brain is that one of the primary ways it keeps you alive is by constantly functioning as a prediction engine. Based on whatever is happening now, it is literally charging up the neurons it thinks it will need to use next.

Here's an example: The other day, my son and I were hiking in the woods. It was a rainy day, so as we were hiking up a steep hill, my son tripped over a great white shark. When you read that, it actually took your brain longer to process the words "great white shark" than the other words.
That's because when your brain saw the word "tripped," it charged up neurons for other words like "log" and "rock," but did not charge up neurons for the words "great white shark." In fact, your brain is constantly predicting in so many ways that it is impossible to describe them all here. But one additional way is in terms of the visual cues words give it. So, if you read the word "math," your brain actually charges up networks to read words that look similar, such as "mat," "month," and "mast," but it does not charge up networks for words that look very different, like "engineer."

Ultimately, you've probably seen the brain's power as a prediction engine meet utter failure. If you've ever been to a surprise party where the guest of honor was momentarily speechless, then you've seen what happens to the prediction engine when it is unprepared for what happens next. The guest of honor walked into their house expecting, for the sake of argument, to be greeted by their dog or to head to the bathroom, not by a house full of people. So their brain literally had to switch functions, and it took a couple of seconds to do it.

But the greater point about how your brain operates like ChatGPT should be becoming clear: if we return to my hiking example, where I said "my son and I were hiking and he tripped over a ___," we see that your brain also essentially used "tokens" like ChatGPT to predict the words that would come next. It saw "hiking" and "tripped," it cued up words like "log" and "rock," but not words like "great white shark," and...
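To make the idea of "statistically likely next words" concrete, here is a minimal toy sketch in Python of a count-based next-word predictor. The tiny corpus, the predict_next function, and the example outputs are all invented for illustration; a real LLM uses a neural network over subword tokens rather than simple word counts, but the basic move of picking the most probable continuation is the same.

from collections import Counter, defaultdict

# Toy corpus standing in for the "bulk sources of language" an LLM learns from.
# These sentences are invented purely for illustration.
corpus = [
    "politicians care about constituents",
    "politicians care about money",
    "politicians care about money and good publicity",
    "politicians care about good publicity",
    "hikers care about weather",
]

# Count how often each word follows each other word (a simple bigram model).
follower_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        follower_counts[current_word][next_word] += 1

def predict_next(word, k=3):
    """Return the k words most likely to follow `word` in the corpus."""
    return [w for w, _ in follower_counts[word].most_common(k)]

print(predict_next("care"))   # ['about']
print(predict_next("about"))  # e.g. ['money', 'constituents', 'good']
# "moon" never follows "about" in this corpus, so the model can never
# predict "moon rocks," no matter how interesting that answer might be.

Notice that the model can only ever hand back word sequences that already exist in its corpus, which is exactly the limitation described above: flood the corpus with "politicians care about moon rocks" and that is what it will return.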