Beyond the Judgement of God. Meltdown: planetary china-syndrome, dissolution of the biosphere into the technosphere, terminal speculative bubble crisis, ultravirus, and revolution stripped of all christian-socialist eschatology (down to its burn-core of crashed security). It is poised to eat your TV, infect your bank account, and hack xenodata from your mitochondria.
Machinic Synthesis. Deleuzoguattarian schizoanalysis comes from the future. It is already engaging with nonlinear nano-engineering runaway in 1972; differentiating molecular or neotropic machineries from molar or entropic aggregates of nonassembled particles; functional connectivity from antiproductive static.
[…]
Converging upon terrestrial meltdown singularity, phase-out culture accelerates through its digitech-heated adaptive landscape, passing through compression thresholds normed to an intensive logistic curve: 1500, 1756, 1884, 1948, 1980, 1996, 2004, 2008, 2010, 2011 …
If you want to peer into the future, you can book a flight to San Francisco International Airport today. You can work your way downtown and watch (mostly) self-driving cars meander, sometimes clumsily, around what initially seems like a small section of urban Boston cut-and-pasted into the most naturally beautiful place you’ve ever been in your life (in the midst of incongruous, endless five-over-one sprawl); you can ride in one and be totally astounded for 20 minutes, and then mostly bored. You can meet people you’ve followed for years on the site formerly known as Twitter, and whom you always had difficulty imagining as having a corporeal form (as they say, in SF, Twitter is real life). You can see hundreds of peculiar SaaS and cloud compute advertisements, few less than three layers of abstraction removed from any concrete application, and most betraying a level of unseriousness and psychological unsophistication and aesthetic immaturity that you think ought to be incompatible with controlling a couple million dollars of capital, let alone a couple billion. You can visit a place where an entire city seems to be implicitly under the Chatham House Rule, if only because the sentiments many people will ~openly express are so extreme and appalling that it’s incumbent on you to protect the speakers’ reputations from themselves.
You can walk by the buildings, the mere normal office buildings, where the inhabitants might or might not be building God, or at least (much more credibly) an alien species made superficially in the image of man, and contemplate the lack of barbed wire and heavily armed guards. You can meet the individuals whose values, competencies, and luck are steering the development & deployment of transformative AI —1 i.e., the rest of everything that happens forever — and observe that they’re essentially normal people, or at least not of a fundamentally different taxon than you (perhaps to your relief, or perhaps not). You may even argue with them about the finer details of the Situation, and most will gladly hear you out and seemingly not take offense, and prove themselves to be decent in many other little ways.
You can readily notice the near-total stratification (which you had been warned of) of downtown San Francisco between ~software engineers and the service worker caste, and idly wonder which you’ll fall into in ten or twenty years; you can retreat to Berkeley, where “software engineers” is replaced with “wealthy students from abroad,” or one of the poorer and more heterogeneous suburban or exurban areas of the Bay.
coldhealing has a tweet that goes like this: “my vision of new york is five boroughs filled entirely with laptop job elite galavanting around the playground city served by an underclass that commutes in from tiny five-over-ones in hoboken”. Whether they realized it or not, this post was about SF, the downtown area of which is probably the most thoroughly powerwashed place I’ve ever been, including Washington, D.C., Wall Street, and Boston.
Californians love their cars, and pedestrians and drivers alike are subservient to the automobile in a way that urban East Coast residents mostly are not (though this is in large part a function of distance & sprawl). You get the sense that the real technocapital demon reaching through time to ensure its own survival is the specter of Henry Ford haunting America, many decades later. Nothing human makes it out of the near future, but compact SUVs probably will.
This is to say nothing of the vagrants in various states of disrepair sunken into the corners of almost every block, nearly wherever you go. They seem more listless than their counterparts in New York City did, and seem ensnared by the otherwise pristine facets and metal thorns of the edifice of 22nd-century(sic) capital, like it is eating them. Every time I see one I feel the granite sidewalk pinching my skin against my joints and vertebrae.
Many of your peers that seem otherwise quite progressive and egalitarian have an obvious, persistent animus for “the bay area homeless population.” Residents of certain neighborhoods of SF have a conditioned apathy for human suffering that rivals that exhibited by ER nurses, and some take active pride in it.
You can also meet people whose sole intention is — quite openly — to enrich themselves at the expense of others by gaining exposure to some part (however distal) of the shovel and pick supply chain for this gold rush, which they breathlessly inform you will be the last gold rush ever to occur in human history; and by them be offered strange drugs with names you’ve never heard before, and asked if you’re making it out of the permanent underclass.
You can, conversely, notice the sheer relative concentration of competence and moral consistency at the top of the pyramid, very near to the compute itself, and how in this particular ecosystem, the most apparently productive organisms in the sunlit zone of the ocean are outnumbered several hundred to one by twilight zone dwelling mollusks and jellyfish and filter-feeders, and seabed scavengers; and idly wonder in two or five years which layer you’ll fall into. You can too easily develop a penchant for tortured analogies.
You might suffer painful reminders that — extremely inconveniently — you still have (some) genuine moral compunctions, carried gingerly from the 2000s to here, that you have no realistic way to either fully satisfy nor fully expunge. Wealthier people than you may tell you there’s no reason to want to be rid of your scruples, and more moral ones may tell you that acting on your values is only so difficult in the imagination. Both are probably right, unfortunately.
You can hand a few dollars to a roaming beggar on the train, since she’s with a baby, then idly wonder whether you’ve been scammed, then decide that any woman in a dire enough situation to end up begging on the train with a baby attached to her probably deserves the help anyway — then idly watch the BART police chase after her a few minutes later.
You might meditate on the natures of competence, gratitude, progress, disillusionment, capital, luck, noblesse oblige, and the perhaps unexpected relationships between them.2
SF is one of the most secular places I’ve ever been, a kind of special economic zone God declines to enter.
Hypergambling culture (see: prediction markets, retail trading of short-dated options, memecoins, the Stanford dropout to Y Combinator pipeline, etc.) has been synthesized with rather extreme forms of classism and cynicism to form a uniquely repulsive new economic religion, one foremost of nihilism — one embraced in its milder forms by much of the disillusioned “gen Z” cohort, many of whom seem to itch for an excuse to declare normative economic participation a lost cause and indulge their most extractive zero-sum aspirations. The source of this cynicism, even only along the financial axis, extends far beyond human displacement by increasingly capable AI systems, to a much broader mood about increasingly efficient (in the EMH sense) and adversarial labor markets, the (arguably preventably) inflated costs of housing and healthcare, currency debasement, and the impending end of USD hegemony.
Of course, many proponents of this complex of beliefs show a marked failure of imagination: tacit in their scheming and rhetoric is the assumption that many parts of the status quo will be preserved indefinitely, even through unprecedented transformation of our species and society. To many others, and perhaps to a lesser extent to me, “will property rights survive the singularity?” probably sounds akin to ““.
Some have been so completely captured by Capital that, when you venture to question their barely-implicit assumption that the only end of human activity is to more efficiently allocate capital for the purpose of maximizing returns to capital, they react as if you have threatened their life. “e/acc” embraces this mindset fully and explicitly, but is essentially just a loose simulacrum of an actual political movement, formed by unimaginative people cribbing aesthetics (from cybernetics, etc.) that they do not understand, and is not worth further discussion.
Certainly not every professional who lives and works in the Bay Area subscribes to this religion, but nearly every committed clergy-member of it that I’ve met so far was at the very least socially or culturally enmeshed in the place. The last twenty years of tech in California are, I’m told, the modern incarnation of last century’s local gold rush economy; the favored term of art is “high-variance,” i.e. the acknowledgement that by (for example) founding or joining a startup, one sacrifices expected value and accepts a likely poor outcome in exchange for a realistic and otherwise inaccessible chance at a right-tail outcome (“generational wealth”). The stakes are, of course, not so serious as long as there’s a 350k TC tech job for the founder to fall back on (longtime Twitter addicts may recall a discourse in which the startup class tried to get away with referring to themselves as being “in the arena,” and perhaps overplayed their hand a bit, resulting in some ridicule).
There are things about the variance-seeking life that I admire, and my own life thus far has been unfathomably strange by normal standards. The irony is in how, within Bay Area startup culture (and elsewhere), it has been productized into a marketable aesthetic, and made toothless in the process. The aforementioned Stanford-to-YC pipeline is one of the more obvious examples of this legitimization3, and has shredded the prestige of both organizations in the process.
The weather is exquisite; the first time I got to Berkeley, I mused about how my uninformed sardonic posts about
In finance, there is a concept known as “volatility time” that refers to a scaling of some feature (e.g., a time series) by the cumulative volatility — volatility being a rolling measure of how “jumpy” or dispersed a price or other feature is, computed using the standard deviation of each window of log-returns. The intuition, AIUI, is that signals tend to carry much more information per unit time immediately before and immediately after major events, i.e., when asset prices are most volatile, and you therefore want to naturally upweight them when fitting forecasting models, for example.
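The transform described above can be sketched in a few lines. This is a minimal illustration, not a reference to any particular library’s estimator; the 20-period window and the plain standard deviation over log-returns are illustrative assumptions.

```python
import numpy as np

def rolling_volatility(prices, window=20):
    """Rolling standard deviation of log-returns: a simple measure of how
    'jumpy' the price series is at each point in time."""
    log_returns = np.diff(np.log(np.asarray(prices, dtype=float)))
    return np.array([
        log_returns[max(0, i - window + 1): i + 1].std()
        for i in range(len(log_returns))
    ])

def volatility_time(prices, window=20):
    """Cumulative volatility: a 'clock' that ticks faster during volatile
    stretches, so that features resampled against it are naturally
    upweighted around major events."""
    return np.cumsum(rolling_volatility(prices, window))
```

Resampling a signal at evenly spaced points of `volatility_time` (rather than wall-clock time) is then one way to get the upweighting described above when fitting a forecasting model.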
In the Bay, perceptual time slows to a crawl. You can retreat to Berkeley or Oakland if you want it to speed up again, or NYC if you want to skip a few months.
I am writing this for multiple reasons — of course, to boast a little, and to indulge my itch to write something freeform and nontechnical, but mainly to assemble a consolidated and public record of the absurdity that I can point to when I want to impress upon someone the strangeness and realness of it all. That is: I get the sense that many people don’t believe me when I relate to them the actual, real epistemic status of the city, but . Another is that I plan to move back to the self-appointed center of the world within a few weeks, and want to indelibly record a sliver of my current impression of it before it all grows mundane to me.
My new acquaintance Celeste Land writes less obliquely about the mood:
The cars drive themselves, a seven-digit salary is considered the only way out of a nearly certain fate in the permanent underclass. Effective altruism is close to a norm. Billboards speak of pull requests, wage slaves go to sleep with their AI wives on heated mattresses that stop working when us-east-1 goes down.
Everyone “hates” it. No one wants to leave.
My experience has been overwhelmingly positive on a personal level, and I am better off for having met the people I have and spent the time I have with them, with vanishingly few exceptions. I must explicitly disclaim this because I am compelled to speak in such an oblique way that it might not be entirely clear.
It feels much like being back in university, an environment I sorely missed.
There is something refreshing about entering a zone where the typical Overton window is several standard deviations closer to yours than to the general public’s, one in which ninety percent of a given conversation about the Situation by volume is no longer dedicated to unbearable microlitigation of sneers and derailments enabled by your casual invocation of “AGI,” or some unobjectionable-seeming mild assumption you made about X starting condition or Y modus ponens, or your interlocutors refusing to believe what is in front of their very eyes — to say nothing of a person like me enjoying other conversational, social, and geographic privileges that I’ve never before experienced in my life. It is simultaneously somewhat maddening to have, for casual conversation, a setpoint that lies squarely on, or is at least an attractor state toward, the rest of everything that’s going to happen [to you] forever. Yudkowsky’s old note about “competent elites” crosses your mind frequently, but so do the various rationalist aphorisms about how there are only very rarely any “adults in the room” by default.
But they are still relatively normal; the Zizians are extreme outliers. There is a difficult tension between the stereotypically “proportionate” reaction to
And it has all been done to death, of course — the religious fervor in the air, the undue credulity in some areas, the unjustified skepticism in others,
It is not, however, opaque or inscrutable. It’s fairly trivial to present a reasonably faithful distillation of a set of beliefs that many incredibly smart and well-informed people, both near to and far from the actual development of frontier AI systems, genuinely hold — here’s my attempt:
Everyone alive today — or merely nearly everyone — might well literally4 die to misaligned ASI, or misaligned humans wielding ASI, within years (20, or 5, or 2, depending on who you ask). Human extinction is on the table, and might be more likely than not.
Recursive self-improvement is in principle possible, and will likely “just happen” in a sense once computer programming (or ML research, if you prefer) is “solved.” RSI can probably be bootstrapped from sufficiently good LLMs. Progress is exponential (or latently super-exponential, potentially). This (along with the following couple reasons) is why some of the below concerns are justified even though present-day AI systems are tripped up by certain kinds of tasks, highly limited in difficult-to-quantify ways, etc.
Capabilities are spiky/non-uniformly distributed/not necessarily predictive of each other (particularly not to the extent they tend to be in humans), and will remain spiky even as AI systems become superhuman over broad swathes of important intellectual tasks.
It is worth at least discussing the concerns listed here for precautionary reasons, as outlandish as they are and as unlikely as many of them may be, because the potential downside is so immense and there are no credible disproofs of their possibility at hand.
The value of human labor — first cognitive, then physical/dextrous — is rapidly going to zero, and nearly all returns to productivity generated by AI and most other economic endeavors will accrue to capital rather than labor, in a vicious cycle of accumulation. Total replacement at a cost well below the minimum living wage is inherently sui generis, and has no historical precedent whatsoever: no previous kind of automation left nowhere for displaced workers to retreat to, nor actively strategized on its own integration into existing means of production, including those used to develop increasingly powerful AI systems. This will cause massive social unrest by default. It also raises nontrivial questions about the teleological role of the consumer in the modern neoliberal-capitalist state mythology.
Total surveillance, bordering on omniscience, of the kind classically dreamt of by despotic regimes, will be trivially possible within a few years. This will extend well beyond mere expert-level analysis of every message, web search, credit card transaction, GPS datapoint, photograph, phone call, or surveillance tape (etc) within Google’s or the NSA’s reach, into things like “a team of experts in a datacenter tirelessly analysing every American’s precise psychological disposition 24/7 based on all the aforementioned information, forever,” and even worse developments that are best not mentioned.
Even if a proverbial black swan occurs, and global AI investment or technical progress is stalled, the technology works and its continued development and diffusion throughout society is an economic inevitability. If it’s waylaid in one place (e.g., the U.S.) and not others (China), one expects .
Advanced AI development is a “winner takes all” game for nations, and might be best modeled as an existential consideration of the same kind as nuclear armament (true believers might claim the situation is even worse: nuclear proliferation can be monitored and controlled through a fairly unobtrusive international surveillance regime, unlike strong AI, and an authoritarian state having nuclear weapons doesn’t confer tools for extremely fine-grained control over the domestic population, unlike strong AI). Among other things, this means that erratic and extreme behavior might be game-theoretically expected from otherwise rational governments as ASI grows nearer.
Those with the means — and if things go well, nearly everyone in the developed world — might literally be able to live arbitrarily long if/when ASI is used to develop radical life extension technologies (i.e., to allow currently living young people to “hit LEV” (longevity escape velocity)). Biological immortality is within reach for the first time in history.
[Some of] the leading AI companies have very bad public optics, and even those that present a positive vision for the future and uphold their prior commitments tend to be saddled with public perceptions generated by the others. This will be relevant to outcomes to the extent that we continue to live in a democratic society (even, say, a violently democratic one).
There is no trial run, and we may not get a warning shot before an AI-related catastrophe that, e.g., kills billions of people; alignment in particular must be “done right the first time,” even though it has very much proven to be an incrementally refineable empirical science under the current LLM regime.
S-risks are both possible and sometimes preferentially generated by “capital” in the most general form possible (i.e., raw optimization power); we have existence proofs in the form of factory farms and third world nations rife with sweatshops, but also e.g. wild animal suffering of biblical scale (for the former). If emulation of sentient minds is possible, then there will be an unambiguous choice for society (or whatever entity/structure ultimately has the authority here) to make between an intense global surveillance regime and allowing crimes of unprecedented scale in silico.
Similarly, there’s no rule saying that AI systems powerful enough to pose catastrophic risks necessarily won’t ever be runnable on consumer hardware — the choice may well be between the aforementioned unprecedented surveillance regime and allowing tens or hundreds of millions of people to have access to [the means to create] weapons of mass destruction. For my part, it seems quite possible that there could be an AGI “kernel” capable of generating (from say, a few tens or hundreds of GB of text/compressed data and in a reasonable amount of time) an AI system roughly on par with current frontier models on consumer hardware. Roughly order-of-magnitude per year improvements in cost per token at a given level of speed and quality have added credence to this intuition over the last 3-4 years.
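The compounding implied by that last sentence is worth making explicit. A rough back-of-the-envelope — where the order-of-magnitude-per-year figure is the essay’s stated intuition, not a measured benchmark:

```python
def cumulative_improvement(factor_per_year: float, years: float) -> float:
    """Total multiplicative cost reduction given a constant annual factor."""
    return factor_per_year ** years

# An order of magnitude per year, sustained for 3-4 years, compounds to a
# thousand- to ten-thousand-fold drop in cost per token at fixed quality:
three_years = cumulative_improvement(10, 3)  # 1000
four_years = cumulative_improvement(10, 4)   # 10000
```

If even a fraction of that trend continues, yesterday’s datacenter-scale model plausibly becomes tomorrow’s consumer-hardware workload, which is the intuition the paragraph above leans on.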
Currently existing frontier AI models are at least as smart as the median human in most ways that matter, with respect to so-called keyboard-mouse-display tasks. They cannot, e.g., quickly pick up a new 3D video game and play it well in real time, but cases like this are increasingly due more to Moravec’s paradox and memory primitives not being tightly integrated with model training than “raw intelligence” or failure to generalize.
[etc.]
I could go on. I want to emphasize that these are not consensus among any particular group of people, certainly not when all taken together. Almost all of them are, however, reasonably close to being modal beliefs among the “kind of people Anna talks to [irl].”
For the record, I more or less believe most of these. They also all independently scare the shit out of me. These are often skirted around in day-to-day conversation, but one gradually gets the sense that this has more to do with boredom or weariness or tactfulness than any kind of intentional deceit or strategization. So pervasive is the air of autistic openness, at least in the higher-trust social environments.
In a way, I have already been stuck in the Bay Area for years. Even though actual inhabitants thereof constituted a minority of my close friends and acquaintances until fairly recently, the rest — in Massachusetts, NYC, the PNW, and elsewhere — have been increasingly culturally downstream of far-west discourses, thoughtforms, and world models for several years. It became impossible to ignore after 2022 or so, to our eternal chagrin; being even moderately intelligent, “plugged in,” and tolerant to the outlandish implies that you care to some extent about/pay some attention to the Project, and accordingly steep in the wretched culture surrounding it. Every group chat is tacitly permanentunderclasschat now.
My excursion made me more optimistic and tempered my mood — at the very least, it reduced the mean number of times per day that I post about the permanent underclass or the singularity or the ASI at the end of time, though this was already beginning to happen just due to sheer boredom of the topic. I remarked to friends at least twice that I wasn’t sure whether this calming had more to do with renewed personal optimism about my specific positioning, or mere contagion from spending so much time around highly amiable people with unflinchingly positive outlooks. I imagine that one needs both kinds to succeed.
Despite the general tone of openness and anti-affectation in the Bay, there are niches of immense performativeness (the latter is often an anti-affectation affectation, IME). For example, I found some irony in how I and others have conducted [riskier,] more ambitious, and more successful medical self-experiments than the overwhelming majority of self-ID’d “peptide enjoyers,” whose grey market Chinese peptides tend to be somewhat mundane GLP-1s.
In this vein, I found it interesting to contemplate how aesthetics reflexively assemble themselves — for example, some of the more obvious tendencies and preoccupations of 21st-century technocapital were visible even in the 60s and 70s and found their way into cyberpunk literature, which then shaped the language we use to discuss and think about technology (and society around it), which constrained their development into a narrower set of outcomes, which then (in combination with people holding self-fulfilling preoccupations with cyberpunk themes) made the future cyberpunk.
It is perhaps, in some senses, the strangest and most fraught moment in history that one could have chosen to be recovering from severe health issues, and to be in the earlier months of gender transition, and trying to become net economically and socially useful for the first time. On the other hand,
I love my em-dashes and will defend them to the death. No LLM was involved in the writing of this post.↩︎
The remainder of this post will be presented as a series of somewhat disjointed vignettes; I couldn’t find it in me to weave these into a more coherent narrative or opinion essay, and I feel it better reflects the contradictory, almost schizophrenic nature of the place.↩︎
There’s a common joke — which is only barely a joke — about how at this point this is a more reliable career path than participation in the FAANG tournament economy.↩︎
In this section, you may consider this a shorthand for “literally, actually, for real, in the normative sense of the term ‘die’|‘live forever’|etc”↩︎