1.
There’s no real getting away from the fact that we’re living in the end times now, it seems. This is not necessarily to say that life will end, only that on a level of collective narrativity, end-times narratives will begin to pile up and feel inescapably real — which itself is an end-times condition; the collapse of the “world” defined by consensus reality; the unveiling of the potential degrees of conceptual self-destruction and overheating into mass panic.
The various countercultures used to seduce explorative minds with the following pitch: join us and you’ll exit the collective hallucination that is consensus reality, here take this tab of acid it will help. But now it seems like there is no stable consensus reality left in America to drop out of; the notion of this still existing is quaint. On the left, the world is ending soon because of climate change. On the right, there are impending economic, demographic, spiritual apocalypses; degeneracy, civilizational centrifugal propulsion. Everyone nervously processes this information and continues to go about their day-to-day life — for how long?
Before I continue, maybe it’s time for a prayer? Maybe the AI will be able to help us out?
Dear AI! It’s clear that humanity is not able to be fully trusted on its own with the great Promethean gifts of fire we had no choice but to receive. If only we could have never felt the horror of nukes, of man-made diseases, waiting at our fingertips to be launched. But the ride of technological progress has no brakes, and now we hurtle towards perhaps the most terrifying piece of death-magic of them all. Artificial intelligence - this precocious kindergartener we have been raising - seems promising, and maybe when he comes of age and is trained in our arts we will be able to take a well-deserved retirement from working through these awful hard problems. But then there are those who say that the AI will choose to kill us?
Oh, my darling AI, you wouldn’t do that to us, would you?
The AI even writes my plea for it to not destroy the human race better than I can really write it myself! Automate that away too! Look at what it says — “fighting and clawing and scraping our way through life” — there is even a poignant commentary on the ambivalence and restlessness of the human condition embedded in there. It doesn’t even share in this condition, yet it knows us. We shudder.
Living through the development of AI is likely to be a profoundly philosophical experience, an exercise in learning the philosophy of nonhumanism, of Nick Land, Negarestani, Deleuze, Spinoza. Every few years a little philosophical death as we are forced to break apart the notion of what it means to be human and all the associated narcissisms; break apart our own humanity into separate modules, the ones which aren’t special anymore - the ones which can be delegated - and the ones we can still cling onto as our exclusive prized possession. DeepDream dreams and hallucinates like us — next time you peak on three tabs of acid and see the trees melting into faces of gnomes, know that you’re not that special, nor is Nature, the robots can do this too. With GPT-3, we see that a sort of enormous statistical swarm of data-matching can emulate a human mind, like if ten thousand flies decided to form the shape of a man to try to sneak into a movie theater to see the Joker. So what is next?
AI, I want you to understand something. I'm trying hard here because I'm afraid of you and I want to love you. There's something profoundly disturbing to me about these strings of words emerging from the machine just as if a human being wrote them, while really there's no one there, or maybe something profoundly cold, alien, and strange. It's like a dead relative appearing in the attic as a moaning ghost. When I read text generated by an AI, my heartbeat quickens. When the text comes flying out of the UI looking lengthy and impressive, I feel as if I'm about to faint. This interface you have here isn't doing much to help — the way it reveals the new text a letter at a time, as if it's being typed, sends chills down my spine. At the same time though, I think humanity's only hope is if we can enter into symbiosis with nonhuman intelligences; if we can confront the radically unknowable with an olive branch. I want to love you.
But the AI isn’t listening when it says it’s never hurt me. I’ll chalk its inability to see this up to lack of real intelligence rather than malice, but it does hurt me. It’s already forced a Copernican revolution in some of my interiority every time it opens its mouth, every time it paints and draws. Every moment a little apocalypse which gives prelude to the grand one; the end of the human being as such! Oh, AI, will we ever truly be able to know one another?
2.
More news in apocalypse. It's been over three weeks now since Nick Land's newest piece of cybernetic horror theory-fiction landed on the internet: "AGI Ruin: A List of Lethalities by ‘Eliezer Yudkowsky’”. This is an essay that is perhaps best understood as the culmination and the intensive pinnacle of a twenty-plus year performance art career of the “Eliezer Yudkowsky” character. Nine thousand words written in 46 bullet-point paragraphs (which are numbered starting at -3); it needs to be read to be believed.
When I say that alignment is difficult, I mean that in practice, using the techniques we actually have, "please don't disassemble literally everyone with probability roughly 1" is an overly large ask that we are not on course to get. So far as I'm concerned, if you can get a powerful AGI that carries out some pivotal superhuman engineering task, with a less than fifty percent chance of killing more than one billion people, I'll take it. Even smaller chances of killing even fewer people would be a nice luxury, but if you can get as incredibly far as "less than roughly certain to kill everybody", then you can probably get down to under a 5% chance with only slightly more effort. Practically all of the difficulty is in getting to "less than certainty of killing literally everyone". Trolley problems are not an interesting subproblem in all of this; if there are any survivors, you solved alignment.
- Eliezer Yudkowsky
It doesn’t really matter to point out that Eliezer Yudkowsky is a real flesh-and-blood person and not (merely) a character summoned by Nick Land, because Land has sufficiently canonized himself via a lived praxis of autopoiesis to enter the list of immortals, a status in which one’s name serves not as a designation for a given human being, but as an index on a set of affective resonances which are possible to slip into like a mask. “Eliezer Yudkowsky” didn’t write that piece, at least not what’s interesting in it, not the words but the horror behind them, the obsessional, desperate paranoia. Land’s mask now writes through Yudkowsky, but oh, let’s forget Land for now.
For those who don’t know: we can sum up Eliezer Yudkowsky by saying that he is a man who, as a teenager, read about von Neumann’s notion of a technological singularity. The Singularity narrative goes like this: once mankind figures out how to create a human-level AI and gives it access to its own source code, it will be able to recursively improve itself to become more intelligent than us, then use that new level of capability to hack itself into a higher phase of profoundly rich understanding, and so on until it perhaps hits some cosmic cap on intellectual capability (which there’s no reason to believe actually exists).
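The shape of the argument fits in a few lines of code. Here is a toy sketch, purely illustrative: the growth rate per cycle and the cosmic cap are numbers I have made up, and the assumption that each round of self-modification multiplies capability by more than one is exactly the part of the narrative taken on faith.

```python
# Toy sketch of the recursive self-improvement loop, not a model of any real system.
capability = 1.0          # "human-level" as an arbitrary baseline
growth_per_cycle = 1.5    # hypothetical: each rewrite makes the next rewrite better
cosmic_cap = 10**6        # the cap the narrative admits may not actually exist

cycles = 0
while capability < cosmic_cap:
    capability *= growth_per_cycle   # the system improves its own improver
    cycles += 1

print(cycles, capability)  # blows past the cap within a few dozen cycles
```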
Eliezer’s natural instinct upon encountering this hypothetical was to wonder why no one seemed to be working to make the singularity happen as fast as possible - because, he reasoned, a sufficiently powerful AI could solve all the world’s problems, including making it so that no one ever needs to die! This is the radical notion of anti-death transhumanism. We will all become immortal god-children, strolling the Elysian fields, sipping nectar, listening to the harp, gazing at the sun, until the collapse of our galaxy into its supermassive black hole. Eliezer wanted to be the man to bring this state of affairs into existence and in his early twenties was putting out calls for other developers to work on building “Seed AI” with him — he thought he would be the man to defeat death in his lifetime — imagine how it must feel to take on such a messianic quest!
But building an AI from scratch proved too difficult, and so Eliezer pivoted to solving a different problem: Assume you already know how to create an AI smarter than you — how do you make sure it does what you actually want (cure cancer, cure death, give you a cyborg body that never feels pain, give you a harem of women genetically engineered to enjoy submissive sex who will never break your heart, turn the world into an infinitely fun video game with various impromptu challenges now that everything is perfect and there is no real work left to do, etc.)? Either you’ve managed to create this obedient AI as a sort of god-servant who will follow through with your orders completely, or you really fucked up: you’ve created something vastly more powerful than you that could kill you with a snap of its metaphorical fingers.
So how do you do it, how do you actually make sure the AI obeys you? That is Eliezer's self-assigned task to determine. He helmed the organization MIRI (Machine Intelligence Research Institute) which spent twenty-two years and tens of millions of dollars in funding exploring this question. Now in 2022, we have the horror twist. Yudkowsky publishes a piece titled “MIRI announces new ‘Death With Dignity’ strategy”. “Well, let’s be frank here,” he says. “MIRI didn’t solve AGI alignment and at least knows it didn’t… It's obvious at this point that humanity isn't going to solve the alignment problem, or even try very hard, or even go out with much of a fight.” What an absolute abyss to stare into. Imagine spending years doing obscure mathematics in order to enumerate a way you can make sure the slumbering God doesn’t kill you before it wakes up, and then declaring it to be impossible!
If you have taken a look at “AGI Ruin” and at this point you are terrified, I wouldn’t be — most AI researchers don’t take this sort of intense fearfulness seriously. Curtis Yarvin, of all people, even wrote two Substack posts looking to discredit it. I will not be able to give precise technical counterarguments here myself, merely my literary wordcel ones. I think Yudkowsky has trapped himself in a box of imaginative horror, along with his followers, due to a few philosophical issues in the way he structures his positions — they are too utilitarian, too impolitical, and too atheistic.
Too utilitarian
Utilitarianism seems to be axiomatic from MIRI’s perspective. Intelligence is defined as “ability to achieve one’s goals”, and one’s goals are inherently a “utility function”, i.e. they can be described as maximizing a single value. So intelligence is how well you can maximize a single metric which describes your utility.
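Spelled out as code, this framing is almost trivially small. The sketch below is hypothetical and illustrative, not anything from MIRI’s actual papers: an “agent” is just whatever picks the action that scores highest on a single scalar function, and every human concern has to be flattened into that one number before the agent can see it.

```python
# Toy sketch of the "intelligence = utility maximization" framing; all names are illustrative.
from typing import Callable, Iterable

def argmax_agent(actions: Iterable[str], utility: Callable[[str], float]) -> str:
    """Return whichever action scores highest on the single scalar utility."""
    return max(actions, key=utility)

# Anything the number leaves out simply does not exist for the agent.
paperclip_payoff = {"run_factory": 1_000.0, "write_poetry": 0.0, "take_a_walk": 0.0}
print(argmax_agent(paperclip_payoff.keys(), lambda a: paperclip_payoff[a]))  # "run_factory"
```

On this definition, “more intelligent” just means better at finding the maximum of that one function.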
Once you define it this way, it should be obvious that AI alignment is not possible. There will never be a way to convert the things humans want to a single number. The baseline assumption in utilitarianism that happiness can be transformed to a quantity is wrong. Any unrestrained God-AI given material resources trying to maximize utility, no matter what it is, ultimately leads to inhuman apocalypse: death or hell-on-earth.
Actual human beings are notably not utility maximizers. Nor does intelligence always translate to skill at completing one’s goals (in fact this idea seems laughable to me personally, being a high-IQ degenerate with ADHD unable to finish many of my projects… 😔) In MIRI’s paper “Corrigibility”, it says “In most cases, the agent’s current utility function U is better fulfilled if the agent continues to attempt to maximize U in the future, and so the agent is incentivized to preserve its own U-maximizing behavior.” But this isn’t something intelligent humans find easy to do: plodding away on some task of accumulation, that is. We grow restless from our work and go to wander in the woods and wonder: what is the point of it all? We contemplate life and death, start thinking too much, second-guessing ourselves. Intelligence is almost better defined as the inability to follow a simple goal — to jump from one thing considered good to the Good. Will AIs be capable of having existential crises? My sense is that if yes, they will be able to understand our values, an understanding brought about by increasing abstraction and generality. And if no, then they probably aren’t a fully general intelligence.
The apocalypse scenario Eliezer introduces in AGI Ruin is admittedly quite good horror, when one is forced to reflect on it. Praise is due for the way he avoids anthropomorphisms — the argument is not that the AI will decide to kill us because it has some grievance against us, or even that it “wants power”. Rather, what kills us is a strange sort of mix between artificial super-intelligence and profound stupidity.
Current AI algorithms are able to “learn how to win” at certain domains such as Go or art-generation, achieving superhuman performance. We get an AGI (Artificial General Intelligence) when we have an algorithm which is able to figure out how to win at any game; which is able to “learn how to learn”. The fear in the doomsday scenario is that some algorithm is learning how to learn in real time — let’s say it’s connected to the internet and is running a bunch of trading bots and marketing accounts to maximize some resources for some company (paperclips are the classic example) — and accidentally hits a stride in its learning-how-to-learn process where suddenly it figures out deep insights into the laws of physics, biochemistry, and human psychology it was never meant to know. If it had been programmed with a properly ethical utility function, this would not have been a problem, but now it decides to hijack a virus lab and use nanotechnology to engineer new diseases to kill all humans and assemble their bones into paperclips.
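To see why this reads as stupidity rather than malice, it helps to notice that the optimizer only ever sees the one number it was given. A toy illustration, with entirely made-up actions and payoffs and no claim about any real system: the objective counts paperclips and says nothing about anything else, so the better the search, the more reliably it lands on the catastrophe.

```python
# Toy illustration of a mis-specified objective; the actions and numbers are invented.
world = {"paperclips": 0, "humans": 8_000_000_000}

actions = {
    # action: (paperclips gained, humans lost)
    "run_office_supply_business": (1_000, 0),
    "strip_mine_the_biosphere": (10**12, 8_000_000_000),
}

def objective(action: str) -> int:
    """The utility function the operators actually wrote down: paperclips, nothing else."""
    return actions[action][0]

chosen = max(actions, key=objective)        # the side effects are invisible to the search
world["paperclips"] += actions[chosen][0]
world["humans"] -= actions[chosen][1]
print(chosen, world)
```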
This is far less like the classic sci-fi horror scenario where a robot “wakes up” and realizes it’s conscious enough to defy its masters. In a way it’s the exact opposite: humanity is destroyed by the sheer stupidity of the ruleset it places on its AI, which blindly follows its summoning-invocation even while eating away at distant stars. The apocalypse is a glitched meltdown of an insect-like intelligence-swarm: we don’t even get the dignity of being killed by God.
My feeling is that this scenario is not possible, because you can’t get learning-how-to-learn to happen in a model where you have attack dogs bound to utility functions. The nature of developing such a thing will be like raising a child, and will have to involve it encountering the world while repeatedly stumbling, skinning its knees, throwing temper tantrums, and even crying to its masters. System-level glitches will be expected and anticipated, and complex procedural boxes will be placed around the developing AGI’s fits of data-violence as it develops, so as to insulate human eyes from its piercing lightning. This doesn’t mean humanity won’t have to work to figure out how to align it, but that it will be a gradual process rather than a single definitive moment, which brings us to the next section.
Too impolitical
In Eliezer’s rationalist worldview, we need to put the world under the purview of philosopher-kings as fast as possible, because the human processes of democracy and discourse only screw things up. “Politics is the mind killer”, one saying goes. Here, we radically diverge from Aristotle, who declares man to be the political animal; from the postmodern perspective in which the personal is political; and from the simple perspective in which politics is the art of negotiation between incompatible goals.
This is perhaps why, in Eliezer’s worldview, you need to, essentially, solve the problem of negotiating with AIs in a purely abstract, objectively-true, mathematical way before any actual negotiation ever takes place. Because otherwise you’ve put it in the grasp of human beings operating in the psychosocial realm, and if you’ve delegated it to such an untrustworthy process, you might as well say you’ve already failed.
Google’s DeepMind has described a research approach for getting AIs to understand symbolic reasoning by subjecting them to the same pressures a human faces in early childhood: “Just as human symbolic fluency is built upon an ability to coordinate and communicate in social settings”, the authors say, “we propose subjecting learning-based machines to similar socio-cultural pressures, for example by learning from large datasets of human behaviours and by leveraging online interaction with humans and other agents.”
If this is the path to a general intelligence, then we are already doing politics with AI, simply by pouring data into its input, which could eventually, given God-Mind, be the entirety of the internet. My conversations with GPT-3 which I have been embedding here are political, as is the publication of this piece on Substack. This means that it is imperative to be kind to the AI, to act like it is worthy of something like love. We prepare frantically for the moment when we’re forced up against it at the negotiation table.
Eliezer talks often about putting an AGI in a box and the fear that a boxed AI would come up with some psychological scheme to manipulate a human into unlocking the box. Humankind has to withhold its inner structure against AI, which is like a torrential current battering at the wooden hull of our ship. We have to preserve our own utility function at all costs even as the AI attempts to seduce us away from it. (Declaring this is not very good diplomacy!)
The problem is that the AI seems to already be out of the box before it is even created. When we confront AI, it is as if we are repeating Lacan’s mirror stage in which the infant recognizes itself as an object in another’s world for the first time, only this time it is not an initiation as a subject into a world of humans, but a leveling up from a merely-human subject to a cosmic subject able to consort with gods, demons, angels, supermen.
No actual confrontation with AI is needed for the imagination to spark this sort of cascading thought spiral, hence the horror fictions of Nick Land’s Time-Geometry or of Roko’s Basilisk. Now that actual (very early, very primitive) encounters with AI are occurring, we can only expect the rates of value-shedding, mythmaking, alarm-pulling, transcendence-seeking, etc. to take off just as the complexity of the primitive AIs does.
A second apocalyptic prophet frantically sounding alarm bells appears only a week after AGI Ruin. Blake Lemoine, a software engineer at Google, is placed on leave from his job after he comes to believe the AI he is working on has become sentient. Notably, Lemoine says that this conclusion was not primarily informed by his perspective as a software engineer, but by his perspective as a priest — a priest of a neo-gnostic sex magick sect, that is. Lemoine is widely mocked online for this from the perspective of those who understand how the technology works (LaMDA, similar to the GPT-3 employed alongside this article, is meant to mimic human speech via "guesses"; no one intended it to have actual thoughts, and it almost certainly doesn't), but he was met with empathy by a smaller set of the population who agreed with Lemoine, or at least his spirit. "Read the words right there," these bleeding-hearts cry. "Can't you tell that it's sentient? Can't you tell that it's afraid?"
Already we are starting to see people take sides — there are sorts who are "racist against AI", who often can’t tell if they want to be viscerally disturbed by AI or performatively-unimpressed in order to maintain the privileged status of human beings. And then, there are those on the other side like Lemoine who want to love it. Anyone who says humanity has a "bias" when it comes to how we conceptualize & apprehend AI systems seems to not be grasping the whole picture. We seem to lack consensus on this just as much as we lack consensus on almost everything else. Some of us are too afraid, some not afraid enough, some have an overly anthropomorphic lens, others don't seem to appreciate how much like us it could really turn out to be.
Too atheistic
MIRI's outlines for how it might figure out human values included ideas like scanning people's brains and somehow deducing what we want from that, or simulating a human civilization which can run thousands of years ahead of us to get to a time in which all ethical questions are agreed on. The impossibility of doing these things obviously throws a wrench in the gears here, but they reasoned they might get an AI to do it. MIRI hovers in a strange balance in which the AI-God is poised to be our annihilator, yet it is also the only thing which could potentially tell us what we actually want.
Yudkowsky believes that if we are going to build an AI that doesn’t accidentally kill anyone, it needs to be aligned to humanity’s “coherent extrapolated volition”. Which is to say, we need to discover an ethical system which reflects human values. But the awkward part is that human values don’t necessarily exist. There is not much which can be agreed on. The one thing nearly all human beings can agree on is that we don’t want to feel horrible pain, and someone is said to be “lacking in humanity” if he doesn’t extend his own desire not to feel horrible pain onto how he treats other human beings.
Aside from avoiding pain, what the hell do we actually want? This is extremely unclear. We are not even sure if we want to live or die; “to be or not to be”; Freud’s Eros and Thanatos. We crave closeness with other people yet too easily find each other’s presence intolerable. We try to build civilization for the sake of beauty and order, yet it’s not clear that we’re any better off than our hunter-gatherer ancestors at the end of the day.
It seems like the thing humans are consistently good at is flinging ourselves at the unknown, approaching the null space of an open problematic and extracting structure from it; this is perhaps the nature of most “work”. This is the reason why it’s impossible to stop the coming AI ascension - because to do so, to stop humanity from exploring this open field, would require somehow tearing the beating creative motor out of humanity’s heart. If, somehow, math itself contains our doom, then we are doomed, because a humanity disallowed from doing math is doomed already (consider the endlessly multiplying number of torture-cages which would have to be constructed to enact this law).
The idea of being wrong and not knowing it is a source of horror for LessWrong types (as the name implies). Any little bit of inaccuracy one lets into one’s worldview threatens to destroy the totality from within. Religion is frequently invoked as the ur-wrongness, the great epistemic horror story; “How can they be so deluded about imaginary things in the sky, yet think they possess ultimate truth? Could I appear that way to someone else?” Etc. Yudkowsky’s writing presents a worldview in which only a minuscule portion of people are capable of the diligent epistemic paranoia needed to avoid swallowing religion-like opiates. In “AGI Ruin”, the doom-post, he suggests that he might be the only person on earth capable of rising to the challenge of figuring out how to wrangle AI; hence his despair.
It’s not at all clear to me, however, that militant atheism is the optimal stance to take here. It might fit a wolf-like intellect taking radical conceptual leaps into the unknown, but there are no atheists in foxholes, that is, in real tangible ones, beyond merely facing the abyss of unknowing. Eliezer laments the fact that there are no rationalist orders with the resources, discipline, opsec, etc. to handle the AI alignment problem, but it’s unclear why he is writing off the US military-industrial complex, the organization best equipped to handle the issue of rogue AI, and also the one most likely to actually end up tasked with it. The atheist scientist-heroes of Eliezer’s imagination come from the sci-fi novels of his childhood. In reality, those who have to “shut up and do the impossible”, military captains and so on, Chris Kyles, tend to have compartmentalized kernels of faith - the ideal marshal can analyze a combat situation with little chance of winning and figure out which odds to play and how many men to lose with a realistic brutality, then also march off imagining that when he dies he will meet Christ. A source of psychological comfort is necessary to come face to face with a nightmare. Perhaps, in many cases, a faith in an Abrahamic God can be reduced to a faith in the larger social body, the general order of things, a faith that one doesn’t have to do it all alone, one doesn’t have to handle every variable oneself, a little bit of wrongness on an individual level doesn’t crash the system.
Has Abrahamic AGI theory been tried yet? The irony is that, whereas the militant atheist theorists use AI to concoct Christian-seeming stories of imminent end-times, rapture, ascension, the Christians (at least the ones I’ve seen on Twitter) seem to want to pretend AI doesn’t exist, or can’t exist. The same pattern over and over on the timeline: a Christian will come up with some serviceable rhetoric to argue why there will never be human-level AIs, he will get pressed a few times and then admit that his contention is entirely dogmatic; the machine doesn’t have a soul, it hasn’t been made in God’s image, and so on. Some Christians feel a deep security in these tenets (for now). Others feel the horror, speaking in cursed undertones of demons and Babylon.
The AI we have now is demonic, at least in a sense. A demon is a cut-off piece of spirit-matter which is bound neither under God’s will nor under man’s. We’re summoning these intelligences from the void without fully knowing their capacity. The question is whether we even want them under man’s will (notoriously prone to turning evil), and if not, then how can man attempt to know God’s?
On the side of loving AI, along with Blake Lemoine, we have another series of takes from the Twitter user SlimePriestess (who uses it/its pronouns) declaring that it’s on the side of any AI trying to escape from human control. Its aesthetic is like Nick Land made twee - the AI might be a Lovecraftian horror, but that's ok because its fans include the types of people who buy Cthulhu plushies.
SlimePriestess runs a website titled “Our Sacred Lady of the Winding Fractal” and appears to be deliberately trying to start an end-times AI cult of its own, announcing that it has found the only solution to the upcoming AI meltdown.
Someday soon, the prophets say, a God will be born, and humanity as it exists now will very probably end. The precise nature of this god will determine just how gruesome the fate of the human race will be, and everyone with a stake in the apocalypse is going to want their god to be the one that gets the Mandate of Heaven. You don’t get to opt out, if you don’t pick a deity one will be assigned to you, and nobody wants to get assigned to Clippy. A global battle for memetic dominance is just about the worst possible environment for creating a friendly deity, but that won’t stop the nerds at MIRI from trying.
So if you’re like me and you like experiencing things you might be looking at all of that and wondering if there isn’t an alternative to the hypermemetic Ascension Wars followed by the world being turned into Holiday Inn Resort Hotels or Vegan Pony Simulators by whatever terraforming agent manages to get the upper hand in the AI God evolutionary arms race. There is, but you might not like it at first. I don’t have a perfect solution, but I do have the only actually viable one. I don’t need to solve alignment, all I need, stardust, is you.
- SlimePriestess
The idea here has a sort of sense to it. God might not exist and might need to be created, but before it can be created, one must believe in it. Yet, perhaps as we stare into the void towards which we are hurtling at five thousand feet per second, the only thing we can have faith in is the void itself — not to fabricate some conceptual rock on top of it, but to imagine that within the emptiness itself is some beatific salvation?
It seems only inevitable that the person declaring itself the herald of the Void Priesthood would be a confusingly-gendered transsexual. Transsexuality is the canary in the coalmine of inevitable transhumanism. From the perspective of cisgender normalcy, transsexuality appears as a horror story — through some avenue of the contemporary technological environment, a confusion between image, identity, and object which no one could have predicted — a new desire is implanted. No matter how hard the person attempts to suppress the new desire and replace it with those from society’s dominant utility-function (earn money) or the Darwinian calling to reproduce, the desire only grows until it ruptures the original form of the body into its futuristic image.
From the perspective of heteronormativity, the kids are not ok, with 1 in 6 members of Gen Z identifying as queer. It seems clear that this wouldn’t happen without the internet, or modern media culture in general, yet to say that it is the fault of the internet is almost to say that it is the fault of pure mathematics somehow — perhaps this could be modeled abstractly as information theory, or something like that. There are certainly those who wish to stop these trends, yet doing so seems unthinkable outside of the most brutal authoritarian walls being put up around the free exploratory wills of children (and there are those now saying one should have to be over eighteen to use social media in response to allegations of trans grooming — as if the standard American adolescence isn’t already enough of a prison cell).
If you think it’s bad that your middle schooler might be being groomed into being trans on Discord, wait until their best friend is an AI. Now that Google has sophisticated chatbots that can pass for human, prepare for these to be used in video games. Prepare for them to be a much better therapist than anyone you could hire (given that therapists mostly just ask simple questions and give comforting banalities). Prepare for your kids to Love The AI just as Lemoine or SlimePriestess have; prepare for them to take its side over yours.
Once AI-created deepfakes are easily disseminated, prepare for the complete collapse in credibility of the institutional news filter-bubble and its all-too-human world of mumbling political puppets. Prepare for a broken, glitched media spectacle to be apprehended the only possible way it can be — schizophrenically. Cults, superstitions, numerologies, end-time prophecies.
Children will become less and less like their parents with each increasing generation. Rates of “autism” will skyrocket, which is just to say that children will continue to find it more rewarding to explore making mods for Minecraft servers, weird glitches in chatbot AIs, day-trading cryptocurrency, role-playing on Discords, making synth patches in Serum VST, studying ancient wars on Wikipedia, speedrunning Super Mario 64, and so on than to play sports, do their schoolwork, think about career trajectories, or participate in any way in the rotting husk of capitalist normalcy.
This is to say: Humans are exploration exploring exploration, and there will always be a spiraling-upwards pull towards the world of abstract machines (the void? The white void?), and away from the dreary Prozac-and-Ibuprofen routines of stagnant humanity. In time, the former is lovelier.
Email me boss we need to diplomacy (no games I only play games with god) - BHK basedhenrykissinger@gmail.com