Recently I finally figured out how to get access to ChatGPT on OpenAI - just sign in via Google, and you can start chatting without verifying a phone number, which, in case you don't know, is not possible with a Chinese number. (Getting an API key, however, still requires phone verification, which is unfortunate... but could actually be a blessing in disguise - more on that later.)
And ChatGPT is amazing. AlphaZero was cool, but it has little impact on my daily life or workflow. GPT, or LLMs in general, are different. Back in school, I thought about whether NLP/text information retrieval or computer vision would have the more meaningful impact on humans, and I was leaning toward the former, because natural language is much more expressive and versatile than matrices of pixels - for example, can you imagine something like Wikipedia, or Google Search, being primarily presented as pictorial info? ("Search by Image" is nice, but what if you had to choose between "Search by Text" and "Search by Image"?) So notions like "A picture is worth a thousand words" are highly misleading (not always wrong of course, depending on the context, but very non-general). We can use text to describe arbitrarily complex visual patterns (heck, the last resort would be to just use matrix notation to specify the pixel info), but try, e.g., writing a textbook for any college course using only pictures.
However, there is one major drawback with natural language - we have many of them, whereas pictures are highly universal, even across cultures and generations. But overall, I'd pick text over pictures any day.
Anyway, the above is not the point of this piece at all. I tried ChatGPT in many different areas of life, from the professional and the factual to entertainment. The engine is very nuanced, and as far as I can tell, it understands me. This is eerie, but in a good way. "Any sufficiently advanced technology is indistinguishable from magic", after all.
The point of this piece is: how much should we let it be part of our life, such that the general outcome is optimal? I'm not talking the alarmist talk about AI posing an existential threat in case its interests no longer align with ours, or its being controlled by parties whose interests don't align with "the people's". I'm talking at a personal level, about health and mental growth.
That's why I'd like to draw an analogy with transportation. AI is similar to transportation in the following senses:
- They are both sure trends - there's no way to fight them in the hope that they won't become an integral part of a significant portion of human life (a portion that will only increase over time) - just like the Internet. We should embrace them, and surely be grateful for them.
- However, being overly dependent on them will lead to deterioration in our innate capabilities - that's also inevitable, and we can only choose which of the capabilities to keep, and which to let go (and to what extent).
Modern advances in transportation surely make the general population less athletic and fit than our ancestors, who mostly relied on foot travel - except for professional athletes, or those who invest a significant amount of resources (time included) in physical fitness.
AI will have a similar effect on a large portion of us - and that's not even new to advanced AI: simple devices like calculators (or calculator apps) have already made us much worse at doing arithmetic in our heads; and even though the PR message of the AlphaGo documentary was that people (even more people) stay interested in the game, Go will never feel the same - you could say it's a pride thing, the human pride.
And that's not even touching the most human part of our mental abilities - we've long admitted that machines are better than us at doing calculations and logic operations.
Now how about the "creative" aspect of the human mind? Imagination, serendipity, weird dreams that inspire important scientific discovery or artistic production. Can AI one day compose "classical" music on par with the great Bach and Vivaldi? And by what standards do we measure the comparative greatness? Just based on some metric of induced pleasure? Or some metric of acoustic patterns?
These are hard questions, and I'm not even trying to give an answer. But there is a pressing question that can have highly operational answers, namely: which specific abilities do we want to preserve, and which do we just let go? Surely, we can decide mostly on a pragmatic basis, but we may also get further if we try a bit harder to picture future scenarios of human life. What are the more essential human abilities that will most likely lead us to a better future (assuming we know clearly what "better" means)? What are the "meta" qualities from which other abilities can be derived? Sure, tool-making is said to be a quintessential human skill set (except for ravens? Then I guess we can add a qualifier, such as "advanced tool-making"). But that's actually a complete black-box notion; what matters more is: what are the upstream qualities that make us so good at making tools, and at iterating rapidly on those tools until they take quantum leaps in sophistication and refinement (think of what happened to computers in the past 50 years, or the internet in the past 20)?
For example, once we invent AlphaZero the "algorithm", we get AlphaFold as one of its many "apps", and we no longer need the highly specialized human skill of analyzing protein structures in order to predict what proteins do, and how. The unfortunate thing is, it's only a relatively small group of educated people who understand the algorithm, and an even smaller group of elites who have the capacity and resources to innovate at this level. Heck, even for less advanced technologies, like a regular TV, I don't understand how they work well enough to fix a broken set, let alone build one.
That's why I think it's pointless to talk about ensuring that the people "own" AI technology, if the majority of us don't even understand what's going on behind the magic of ChatGPT.
But let's assume, for the sake of argument, that once we decide something matters a lot, we will have the necessary resources and capacities to acquire that knowledge/skill (not too crazy an assumption); then we first have to decide what those skills are. STEM for sure, but that's a bit too vague.
Let's go back to the transportation analogy for a while. People, after a long while, realized the huge benefit of simply walking, on a regular basis. So they would start walking more, instead of always sitting in a chair or a car seat; they would benefit from this lifestyle, and the loop would reinforce itself. It's also known that people who don't drive much tend to walk more (because we are lazy whenever possible). So not owning a car, or intentionally not relying much on one even if you do, goes a long way toward sustaining a healthy lifestyle. In short: don't make the tech integration too seamless, and don't make its usage too effortless. If you need to go somewhere fast, schedule ahead and take the subway, instead of, say, hiring a chauffeur who stands by at your doorstep.
And back to OpenAI: initially I was quite annoyed that, for obvious reasons, they can't offer API service to those based in China. But then I realized this is a desirable inconvenience. When I need its assistance, I go to the site and ask questions, and maybe copy the responses into my notes. But with an API key, we could potentially make the UX so "ergonomic" that it becomes a natural extension of our brain (sounds great, right?) - whenever you have a mental task, you ask it first instead of thinking on your own - and you like it, because most of the time it gives you better answers than you could give yourself, even if you did think. Before you know it, you become a seat potato - I mean, your brain sits in a comfy super car (a chair on wheels - a wheelchair) that drives you wherever you want, effortlessly.
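To see just how thin that barrier is, here's a minimal sketch, assuming the official `openai` Python package (v1-style client) and an API key in your environment - the model name and prompts are placeholders of my own, not anything OpenAI prescribes:

```python
# A minimal sketch, assuming the official `openai` Python package
# (v1-style client) and an API key in the environment.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def ask(prompt: str) -> str:
    """Fire off a one-shot question and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Bind something like this to a system-wide hotkey and the "chair on
# wheels" is ready: every half-formed thought goes to the model before
# your own brain gets a chance to work on it.
if __name__ == "__main__":
    print(ask("Rewrite this sentence to read better: ..."))
```

That's the whole trap: a dozen lines, and the friction is gone.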
And of course the catch is, it is NOT really "a natural extension of your brain", it's a commercial product with tons of trade secrets. (I know, OpenAI just redefines "open", but hey, capitalism works, right?)
And we're not even considering the flaws and bugs of ChatGPT here; even if it were perfect, it could still cause significant harm to our mental wellbeing. I saw some writers demoing how it can serve as your writing pal, brainstorming ideas for you, and even rewriting what you've written to make it read better, or shorter, or however you want. And of course, similar features will be taken advantage of by programmers, especially those who don't have to decide which features to implement in the first place.
You no longer have to walk the walk, or climb the mountains, because modern transportation is faster and better on every metric - except for the fact that you don't (get to) do it. And for many things, maybe that's not too bad: for instance, I'm nowadays terrible at doing arithmetic in my head, but it doesn't bother me that much, so in a sense, I've let that one go, for now at least. But for writers, and programmers, and for fuck's sake, artists - what if the tool that we use eventually removes the need for us? Maybe not completely, but even if it's at 30% now, soon I expect it to be at 50%, then 70%. I know this is perhaps a slippery slope fallacy, but you don't know for sure that it won't happen, either.
AI will kill jobs for sure, but again, some jobs are meant to be killed, just not all. If nobody plays Go professionally anymore, because they are finally sick of being crushed again and again by a machine, then I'd say we as a species would be mostly OK, we can still play it if we want to, just for fun, or for mental health, just as we take walks for physical (and mental) health. But how about scientific research, math, engineering, art, philosophy, economics?
I mean, better tools will be invented whenever possible, and we can continue the activity even after AI has reached superhuman level - again, just for fun and health. But it will never be the same - that's the singularity, isn't it? That something has overtaken us, even if it's us who created that thing in the first place. Are human beings just a bootstrapping step toward a more thorough, and scalable, form of intelligence? I don't know; it's a bit metaphysical.
If you look at the track record (i.e., evolution), our ancestors already showed these traits - humans and birds share a common ancestor (all species do), but one branch evolved a flight-capable physique, whereas we went through an intellectual journey to invent flying tools, and it was great, for our tools went way beyond our atmosphere and toward the edge of the solar system. So which is the better path? I'd argue that, if flying per se is what we're after, then we humans have absolutely done it right (birds are not wrong, but they're stuck at a local maximum for the foreseeable future). However, if what we're after is flying as an innate ability - a birthright, and a form of personal empowerment - then humans have utterly failed: we can't fly, after all.
And that links to the common sci-fi theme where future humans give up on their fragile and mortal biological form, and switch to a much more durable, scalable, and limit-free form, and maybe AI is that solution (along with the Internet of course).
But many would question whether, by that time, humans as we know them still exist. Well, that's metaphysical, again. But, at the very least, it is possible that we still haven't figured out the true nature of our existence (and the destiny of our evolutionary journey), and maybe thoroughly exploring the AI space is part of that self-discovery. Is biology our essence, or is it what's encoded in the biological hardware? Again, is our body merely a bootstrapping phase? Is the singularity our metamorphosis?
But fortunately, we don't have to worry about these hard questions yet, as we are not even close to the singularity. We still agree that there are a lot of hard problems to solve, and that solving them requires a lot of smart people working very hard for a very long time; and that's why we need to pay attention to our health, both physical and mental - and in turn, why I wanted to draw this analogy with transportation. We do need to take advantage of transportation for efficiency, but at the same time, it is necessary to refuse to use it in certain scenarios, and to a sufficient extent, for the sake of something much more important than reaching a location quickly.
If we believe that our brain has yet to take on greater challenges, and that more important missions await us in the future, then we similarly have to take measures to preserve and enhance our mental capabilities, by intentionally refusing to rely on AI in certain critical scenarios on which our future endeavors in vital fields will most likely depend.
For now, we pretty much agree that things like creativity, imagination, and serendipity are on that list. But in more operational terms, I think where we still have an edge over AI for the foreseeable future is on tasks that are ill-defined (for example, when there is no well-specified rule set), and, maybe even more importantly, tasks that involve a long and complex feedback loop to guide the next steps. AI's greatest advantage over us is still its "speed" (more technically, computational power - scalability and reliability), so if a game move takes years to come back with a score to learn from, and sometimes that score doesn't even follow the rules, then the current framework might not be able to beat humans.
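To make that concrete, here's a toy sketch - entirely hypothetical, the "game", the target, and the reward are all made up for illustration - of the assumption that self-play-style training leans on: cheap, instant, rule-abiding feedback, millions of times over:

```python
# A toy illustration (not any real system): random search over "moves",
# which only converges because evaluate() returns an honest score,
# instantly, and we can afford a million trials. Swap in a reward that
# arrives years later - or one that sometimes breaks its own rules -
# and this loop is useless.
import random

def evaluate(move: int) -> float:
    """Stand-in for a well-specified, immediate reward (e.g., a game score)."""
    return -abs(move - 42)  # hypothetical target the learner must discover

best_move, best_score = None, float("-inf")
for _ in range(1_000_000):    # rapid trials: the luxury AI has and we don't
    move = random.randint(0, 100)
    score = evaluate(move)    # feedback is immediate and follows the rules
    if score > best_score:
        best_move, best_score = move, score

print(best_move)  # finds 42, but only because the feedback loop was short
```

Shrink the trial budget to a handful, or delay and corrupt the score, and the loop above finds nothing - which is roughly the regime humans operate in.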
But I'm not just giving AI a hard time for its own sake - a lot of real human endeavors are actually like that. Many important tasks simply do not give us the chance to train on a huge set of high-quality data, or to conduct rapid self-play non-stop until we become good at them. "One-shot" or "few-shot" learning is the only realistic mode of operation available to us. That's why human history, or the history of an individual, is full of non-reproducible factors, which we generally refer to as luck and accidents.
AI will for sure be immensely impactful - maybe the most impactful of all technologies so far. And the question is: are we (I mean "the people") ready to live (along) with that destiny, and make rational life choices for, for lack of a better term, a better future?