Klara and the Sun
What follows is a fragment of a journal entry I wrote on February 24th, 2023.
I just finished reading Klara and the Sun, by Kazuo Ishiguro. My first impression is that it’s one of the best science fiction books I’ve ever read, and definitely one of my favorites. As my dad once suggested when we were discussing Interstellar, good science fiction presents us with a world in which an aspect of science or technology that is now merely speculative is fictionally confirmed, not so much as a way to speculate about how cool or fancy our gadgets might become, but as a new mirror that we might use to learn something about what it means to be human. To me, Interstellar makes one of the most beautifully nerdy points ever made: that love and gravity are the only two things capable of traveling back in time.
Klara and the Sun explores AI, consciousness and love in a way that is both deeply touching and reflective of a nuanced understanding of the most recent technological advances and the philosophical debate around them. The book skillfully presents some topics as a societal backdrop, like the human fear of not being the most intelligent beings and our treatment of human-like machines; the future of work and the possibility of automation disrupting white-collar jobs for the first time; genetic editing and its impact on class stratification, friendship and parenting.
However, it seems to me that the main topic it explores is consciousness in humans and machines: Can machines be sentient? Do they need to be sentient in the same way we are to be fully sentient? Is the Turing test proof of intelligence, or proof of the imitation of intelligence, and what is the difference? Is human consciousness indicative of there being something bigger, or is it an unintentional byproduct of our neurological development? Is religious behavior simply faulty logic inherent to human nature, or is it actually tapping into a subtler, perhaps even superior, bigger-than-logic awareness? Is it possible for a machine, even if by imitation of human behavior or in order to optimize a reinforcement metric, to internally experience and externally display more kindness, hope, selflessness or irrational faith than a human? These questions are barely, if ever, posed literally in the book, but they are the ones that have stayed with me the most.
I also found that Ishiguro’s portrayal of Klara’s perception, emotions and thoughts was one of the most fascinating elements of the novel. The way Klara describes textures and shapes and often confuses objects that humans wouldn’t; the way she develops purely descriptive names for new things that are awkwardly just a tiny bit off, or draws on her very limited experience to come up with far-fetched yet accurate metaphors; the way she associates disjointed memories; the way she malfunctions; the way she becomes capable of actions that seem to contradict general instructions when she judges them to serve a broader purpose… All these behaviors are incredibly reminiscent of what we’ve seen in deep neural nets in the last few years—especially, and perhaps most shockingly, with the advances in language technology and generative models that have happened in the year or so since the book came out. And, with Ishiguro’s signature use of first-person narrators whose unreliability the reader is more aware of than the narrator itself, and his understated, subtle writing that leaves a lot for the reader to decipher, it really is the most realistic and immersive experience of the latent space I’ve had so far.
Interestingly, by making us see love, hope, kindness and self-sacrifice from the point of view of a machine, Ishiguro makes us reflect on what those things have to do with being human. What causes Klara to experience them, and what algorithmic purpose do they serve? What are we humans besides similar algorithms implemented on a different material substrate?
Another thought that came to mind: so what if machines are also capable of things we believed to be the exclusive realm of humankind? Why do we need an exclusive monopoly over so many qualities? We aren’t only what only we are. Cats aren’t any less cat-like, or any less worthy of existence for that matter, because other animals also hunt, eat, dream or form bonds with people. Maybe our attachment to what we think it means to be human, and the sense of value we derive from it, just needs some adjusting.