Beyond the Judgement of God. Meltdown: planetary china-syndrome, dissolution of the biosphere into the technosphere, terminal speculative bubble crisis, ultravirus, and revolution stripped of all christian-socialist eschatology (down to its burn-core of crashed security). It is poised to eat your TV, infect your bank account, and hack xenodata from your mitochondria.
Machinic Synthesis. Deleuzoguattarian schizoanalysis comes from the future. It is already engaging with nonlinear nano-engineering runaway in 1972; differentiating molecular or neotropic machineries from molar or entropic aggregates of nonassembled particles; functional connectivity from antiproductive static.
[…]
Converging upon terrestrial meltdown singularity, phase-out culture accelerates through its digitech-heated adaptive landscape, passing through compression thresholds normed to an intensive logistic curve: 1500, 1756, 1884, 1948, 1980, 1996, 2004, 2008, 2010, 2011 …
If you want to peer into the future, you can book a flight to San Francisco International Airport today. You can work your way downtown and watch (mostly) self-driving cars meander, sometimes clumsily, around what initially seems like a small section of urban Boston cut-and-pasted into the most naturally beautiful place you’ve ever been in your life (in the midst of incongruous, endless five-over-one sprawl1), and perturbed vertically such that many of the streets and buildings reside on grades you didn’t realize were legally or mechanically viable to build on; you can ride in one and be totally astounded for 20 minutes, and then mostly bored. You can meet people you’ve followed for years on the site formerly known as Twitter, and whom you always had difficulty imagining as having a corporeal form. As they say, in SF, Twitter is real life.
You can see hundreds of peculiar SaaS and cloud compute advertisements2, few less than three layers of abstraction removed from any concrete application, and most betraying a level of unseriousness and psychological unsophistication and aesthetic immaturity that you think ought to be incompatible with controlling a couple million dollars of capital, let alone a couple billion. You can visit a place where an entire city seems to be implicitly under the Chatham House Rule, partly because everyone knows everyone else and partly because the sentiments some people will ~openly express are so appalling that it’s incumbent on you to protect the speakers’ reputations from themselves.
You can walk by the buildings, the mere normal office buildings, where the inhabitants might or might not be building God, or at least (much more credibly) an alien species made superficially in the image of man, and contemplate the lack of barbed wire and heavily armed guards. You can meet the individuals whose values, competencies, and luck are steering the development & deployment of transformative AI —3 i.e., the rest of everything that happens forever — and observe that they’re essentially normal people, or at least not of a fundamentally different taxon than you (perhaps to your relief, or perhaps not). You may even argue with them about the finer details of the Situation, and most will gladly hear you out and seemingly not take offense, and prove themselves to be decent in many other little ways.
You will readily notice the near-total stratification (which you had been warned of) of downtown San Francisco between ~software engineers and the service worker caste, and idly wonder which you’ll fall into in ten or twenty years; you can retreat to Berkeley, where “~software engineers” is replaced with “~wealthy students from abroad,” or one of the poorer and more heterogeneous suburban or exurban areas of the Bay.
coldhealing has a tweet that goes like this: “my vision of new york is five boroughs filled entirely with laptop job elite galavanting around the playground city served by an underclass that commutes in from tiny five-over-ones in hoboken”. This post may as well be about SF. The city’s downtown area is probably the most thoroughly powerwashed place I’ve ever been, even when including Washington D.C., Wall Street, and Boston.
Californians love their cars, and pedestrians and drivers alike are subservient to the automobile in a way that urban East Coast residents mostly are not (though this is in large part a function of distance & sprawl). You get the sense that the real technocapital demon reaching through time to ensure its own survival is the specter of Henry Ford haunting America, many decades later. Nothing human makes it out of the near future, but compact SUVs probably will.
This is to say nothing of the vagrants in various states of disrepair sunken into the corners of almost every block, nearly wherever you go. They seem more listless and passive than their counterparts in New York City did, and ensnared by the otherwise pristine facets and metal thorns of the edifice of 22nd-century(sic) capital, like it is eating them. Every time I see one I feel the granite sidewalk pinching my skin against my joints and vertebrae.
Many (but by no means all) of your peers that seem otherwise quite progressive and egalitarian have an obvious, persistent animus for the notorious “Bay Area homeless population.” Residents of certain neighborhoods of SF have a conditioned apathy for human suffering that rivals that exhibited by ER nurses, and some take active pride in it.
You will also meet people whose sole intention is — quite openly — to enrich themselves at the expense of others by gaining exposure to some part (however distal) of the shovel and pick supply chain for this gold rush, which they breathlessly inform you will be the last gold rush ever to occur (at least, the last one in human history); and by them be offered strange drugs with names you’ve never heard before, and asked if you’re making it out of the “permanent underclass.”
You can, conversely, notice the sheer relative concentration of competence and moral consistency at the top of the pyramid, very near to the compute itself, and how in this particular ecosystem, the most apparently productive organisms in the sunlit zone of the ocean are outnumbered several hundred to one by twilight zone dwelling mollusks and jellyfish and filter-feeders, and seabed scavengers that operate in total darkness; and idly wonder in two or five years which layer you’ll fall into. You can too easily develop a penchant for tortured analogies.
You might suffer painful reminders that — extremely inconveniently — you still have (some) genuine moral compunctions, carried gingerly from your childhood in the 2000s to here, that you have no realistic way to either fully satisfy or fully expunge. Wealthier people than you may tell you there’s no reason to want to be rid of your scruples, and more moral ones may tell you that acting on your values is only so difficult in the imagination. Both are probably right, unfortunately.
You might hand a few dollars to a roaming beggar on the train, since she’s with a baby, then idly wonder whether you’ve been scammed, then decide that any woman in a dire enough situation to end up begging on the train with a baby attached to her probably deserves the help anyway — then idly watch the BART police chase after her a few minutes later.
You might meditate on the natures of competence, gratitude, progress, disillusionment, capital, luck, noblesse oblige, and the perhaps unexpected relationships between them.4
SF is one of the most classically secular5 places I’ve ever been, a kind of special economic zone God declines to enter.
Hypergambling culture (see: prediction markets, retail trading of short-dated options, memecoins, provocative Substacks, the Stanford dropout to Y Combinator pipeline, etc.) has been synthesized with rather extreme forms of classism6 and cynicism to form a uniquely repulsive new economic religion, one foremost of nihilism — one embraced in its milder forms by much of the disillusioned “gen Z” cohort, many of whom seem to itch for an excuse to declare normative economic participation a lost cause and indulge their most extractive zero-sum aspirations. It utterly dominates the software world. The source of this cynicism, even only along the financial axis, extends far beyond human displacement by increasingly capable AI systems, to a much broader mood about increasingly efficient (in the EMH sense) and adversarial labor markets7, the (arguably preventably) inflated costs of housing and healthcare, currency debasement, and the impending end of USD hegemony. Unhelpfully, the modern Twittersphere is essentially engineered for maximally efficient manufacture of status anxiety and the attendant lifestyle inflation among mid-20s gen Z professionals.
Of course, many proponents of this complex of beliefs show a marked failure of imagination: tacit in their scheming and rhetoric is the assumption that many parts of the status quo will be preserved indefinitely, even through unprecedented transformation of our species and society. To many others, and perhaps to me to some lesser extent, “will property rights survive the singularity?” probably sounds akin to an uplifted spider asking a human if classical music is somehow instrumentally useful for insect capture8.
Some have been so completely captured by Capital that, when you venture to question their barely-implicit assumption that the only end of human activity is to more efficiently allocate capital for the purpose of maximizing returns to capital, they react as if you have threatened their life. Trying to trace out the motivations for this line of thinking in search of something constructive or human9 often reveals nothing but an ouroboros of sophistry. “e/acc” embraces this mindset fully and explicitly, but is essentially just a loose simulacrum of an actual political movement, formed by unimaginative people cribbing aesthetics (from cybernetics, etc.) that they do not understand, and is not worth further discussion.
Certainly not every professional who lives and works in the Bay Area subscribes to this religion, but nearly every committed clergy-member of it that I’ve observed so far is at the very least socially or culturally enmeshed in the place. The last twenty years of tech in California are, I’m told, the modern incarnation of last century’s local gold rush economy; the favored term of art is “high-variance,” i.e. the acknowledgement that by (for example) founding or joining a startup, one sacrifices expected value and accepts a likely poor outcome in exchange for a realistic and otherwise inaccessible chance at a right-tail outcome (“generational wealth”). The stakes are, of course, not so serious as long as there’s a 350k TC tech job for the founder to fall back on10 (longtime Twitter addicts may recall a discourse in which the startup class tried to get away with referring to themselves as being “in the arena,” and perhaps overplayed their hand a bit, resulting in some ridicule).
There are things about the variance-seeking life that I admire, and my own life thus far has been unfathomably strange by normal standards. The irony is in how, within Bay Area startup culture (and elsewhere), it has been productized into a marketable aesthetic, and made toothless in the process. The aforementioned Stanford-CS-to-YC pipeline is one of the more obvious examples of this legitimization11, and has shredded the prestige of both organizations in the process.
The weather12 is truly exquisite, and you can go out most days without a sweatshirt or any particular attention to what kinds of clothes you wear. In the past, I’ve made uninformed sardonic posts about how rationalists, EAs, AI safety people, etc. post like that because they spend most of their time in exceedingly beautiful, high-trust built environments surrounded by immense natural beauty and an Edenic climate. The first time I got to Berkeley, I mused about how these assumptions were actually even truer than I thought. I don’t really hold it against them; I’m slowly coming to realize there’s little to no nobility in suffering, and if you have aspirations to do serious work with the intent of helping others, it’s much better to have a mindset not dominated by scarcity and zero-sumness.
I am writing this for multiple reasons, e.g. to indulge my itch to write something freeform and nontechnical — but mainly to assemble a consolidated and public record of the absurdity that I can point to when I want to impress upon someone the strangeness and realness of it all. That is: I get the sense that many people don’t believe me when I relate to them the actual, real epistemic status of the city, but maybe if I lay it all (well, most) out in one place in as clear and lucid a way as I can manage, it will make sense. Another is that I plan to move back to the self-appointed center of the world within a few weeks, and want to indelibly record a sliver of my current impression of it before it all grows mundane to me.
My new acquaintance Celeste writes[link] less obliquely about the mood13:
The cars drive themselves, a seven-digit salary is considered the only way out of a nearly certain fate in the permanent underclass. Effective altruism is close to a norm. Billboards speak of pull requests, wage slaves go to sleep with their AI wives on heated mattresses that stop working when us-east-1 goes down.
Everyone “hates” it. No one wants to leave.
I recently spent five weeks there, staying mostly in Berkeley but frequently venturing into the city (typically via the BART, which I found incredibly clean and safe-feeling compared to e.g. the MTA in NYC or the MBTA in Boston, but at the expense of noise and worse coverage). I rode in the Waymos and ate excellent food and took long walks14 from downtown Berkeley to Oakland and drank Soylent and met at least three or four dozen different people. I visited Salesforce Park and the Ferry Building and saw the Golden Gate Bridge from afar and did many other things of this sort.
My experience has been overwhelmingly positive on a personal level, and I am better off for having met the people I have and spent the time I have with them, with vanishingly few exceptions. I love my friends. I must explicitly disclaim this because I am sometimes compelled to speak in such an oblique way that it might not be entirely clear, and because speaking about any of this at all in a public setting is fraught.
Nevertheless, and despite the absurdity, it socially and spatially and somatically feels much like being back in university, an environment I sorely missed. The proximity (sprawl permitting), spontaneity, and sheer density of interesting people is thrilling.
In a way, I have already been stuck in the Bay Area for years. Even though actual inhabitants thereof constituted a minority of my close friends and acquaintances until fairly recently, the rest — in Massachusetts, NYC, the PNW, and elsewhere — have been increasingly culturally downstream of far-west discourses, thoughtforms, and world models for quite a while. It became impossible to ignore after 2022 or so, to our eternal chagrin; being even moderately “plugged in,” intelligent, and tolerant to the outlandish basically implies that you care to some extent about/pay some attention to the Project, and accordingly steep in the sometimes-wretched culture surrounding it. Every group chat is permanentunderclasschat now.
My excursion made me more optimistic and tempered my mood — at the very least, it reduced the amount that I post about the permanent underclass, or LEV, the singularity, or the ASI singleton at the end of time, though this was already beginning to happen just due to sheer exasperation with the topic (both mine and my interlocutors’ (sorry)). I remarked to friends that I wasn’t sure whether this calming had more to do with renewed personal optimism about my specific positioning, or mere contagion from spending so much time around highly amiable people with unflinchingly positive outlooks. I imagine that both kinds of optimism are important in their own ways.
In finance, there is a concept known as “volatility time” that refers to a rescaling of time for some feature (e.g., a price series) by cumulative variance — volatility being a rolling measure of how “jumpy” or dispersed a price or other feature is, typically computed as the standard deviation of each window of log-returns. That is, the basic unit of time becomes the integral of variance rather than wall time. The intuition, AIUI, is that signals tend to carry much more information per unit of wall time when asset prices are most volatile (immediately before and after major events, right after market open, right before close, etc.), and you therefore want to naturally upweight those stretches when fitting forecasting models, for example.
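The cumulative-variance clock can be sketched in a few lines. This is a toy illustration, not anyone’s production code; the function name, the trailing-window choice, and the default window length are all mine:

```python
import math
from statistics import pvariance

def volatility_time(prices, window=20):
    """Reindex a price series onto 'volatility time': the running
    integral (cumulative sum) of the rolling variance of log-returns,
    so that volatile stretches occupy more of the new time axis."""
    log_ret = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    clock, total = [], 0.0
    for i in range(len(log_ret)):
        lo = max(0, i - window + 1)            # trailing window of returns
        total += pvariance(log_ret[lo:i + 1])  # variance over that window
        clock.append(total)
    return clock  # clock[i] = vol-time elapsed after the (i+1)-th price
```

Equal intervals of the returned clock then correspond to equal amounts of realized variance rather than equal amounts of wall time, which is exactly the upweighting described above: a turbulent afternoon occupies more of the new axis than a sleepy week.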
In the Bay, perceptual time slows to a crawl, and the higher the density of “short timelines people” around you, the more pronounced the effect is. There is a distinct weightiness to individual weeks and months conferred by perceived exponential progress, the impending eschaton, the sheer busyness of everyone at ground zero, et cetera15. It sometimes feels like it might slow to a stop at the precise moment of impact. You can retreat to the East Coast if you want it to speed up again.
There is something refreshing in entering a zone where the typical Overton window is several standard deviations closer to yours than to the general public’s, one in which ninety percent of a given conversation about the Situation by volume is no longer dedicated to unbearable microlitigation of sneers and derailments invited by your casual invocation of “AGI,” or some unobjectionable-seeming mild assumption you made about X starting condition or Y modus ponens, or your interlocutors refusing to believe what is in front of their very eyes16 — to say nothing of a person like me enjoying other conversational, social, and geographic privileges that I’ve never before experienced in my life.
It is simultaneously somewhat maddening to have, for casual conversation, a discursive setpoint that lies squarely on, or at least a discursive attractor state toward, the rest of everything that’s going to happen [to you] forever. Earnestness about the extent to which your life (and many others’) is in the hands of a small group of highly competent and willful people is in permanent tension with social caution17 18.
Yudkowsky’s old note about “competent elites” crosses your mind frequently, but so do the various TPOT aphorisms about how there are only very rarely any “adults in the room” by default.
And it has all been done to death, of course — the religious fervor in the air, the undue credulity in some areas, the unjustified skepticism in others.
It is not, however, opaque or inscrutable. It’s fairly trivial to present a reasonably faithful distillation of a set of beliefs that many incredibly smart and well-informed people19, both near to and far from the actual development of frontier AI systems, genuinely hold — here’s my attempt:
Everyone alive today — or merely nearly everyone — might well literally20 die to misaligned ASI, or misaligned humans wielding ASI, within years (20, or 5, or 2, depending on who you ask). Human extinction is on the table, and might be more likely than not. Permanent totalitarian oppression of every living creature forever by a malign singleton is also quite possible.
Recursive self-improvement is in principle possible, and will likely “just happen” in a sense once computer programming (or ML research, if you prefer) is “solved.” RSI can probably be bootstrapped from sufficiently good LLMs. Progress is exponential (or latently super-exponential, potentially). This (along with the following couple reasons) is why some of the below concerns are justified even though present-day AI systems are tripped up by certain kinds of tasks, highly limited in difficult-to-quantify ways, bottlenecked by the physical world, etc.
Capabilities are spiky/non-uniformly distributed/not necessarily predictive of each other (particularly not to the extent they tend to be in humans), and will remain spiky even as AI systems become superhuman over broad swathes of important intellectual tasks.
It is worth at least discussing the concerns listed here for precautionary reasons, as outlandish as they are and as unlikely as many of them may be, because the potential downside is so immense and there are no credible disproofs of their possibility at hand.
The value of human labor — first cognitive, then physical/dextrous — is rapidly going to zero, and nearly all returns to productivity generated by AI and most other economic endeavors will accrue to capital owners rather than labor, in a vicious cycle of accumulation. Total replacement at a cost well below the minimum living wage21 is inherently sui generis, and has no historical precedent whatsoever: no previous kind of automation left ~nowhere for displaced workers to retreat to, nor actively strategized on its own integration into existing means of production, including (in this case) those used to develop increasingly powerful AI systems. This will cause massive social unrest by default. It also raises nontrivial questions about the teleological role of the consumer in the modern neoliberal-capitalist state mythology.
Total surveillance, bordering on omniscience, of the kind classically dreamt of by despotic regimes, will be trivially possible within a few years. This will extend well beyond mere expert-level analysis of every message, web search, credit card transaction, GPS datapoint, photograph, phone call, or surveillance tape (etc.) within Google’s or the NSA’s reach, into things like “a team of (AI) experts in a datacenter tirelessly analysing every American’s precise psychological disposition 24/7 based on all the aforementioned information, forever,” and even worse developments that are best not mentioned.
Even if a proverbial black swan occurs, and global AI investment or technical progress is stalled, the technology works and its continued development and diffusion throughout society is an economic inevitability. If it’s waylaid in one place (e.g., the U.S.) and not others (China), one expects the other(s) to continue rapidly advancing their own capabilities and use these to gain a potentially-permanent economic/political/military advantage. There is a corollary about the coordination problems involved in voluntary AI pauses/slowdowns.
Advanced AI development is a “winner takes all” game for nations, and might be best modeled as an existential consideration of the same kind as nuclear armament (true believers might claim the situation is even worse: nuclear proliferation can be monitored and controlled through a fairly unobtrusive international surveillance regime, unlike strong AI, and an authoritarian state having nuclear weapons doesn’t confer tools for extremely fine-grained control over the domestic population, unlike strong AI). Among other things, this means that erratic and extreme behavior might be game-theoretically expected from otherwise rational governments as ASI grows nearer (see, e.g., U.S.-China relations WRT Taiwan).
Those with the means — and if things go well, nearly everyone in the developed world — might literally be able to live arbitrarily long if/when ASI is used to develop radical life extension technologies (i.e., to allow currently living young people to “hit LEV” (longevity escape velocity)). Biological immortality is within reach for the first time in history.
[Some of] the leading AI companies have very bad public optics, and even those that present a positive vision for the future and reliably uphold their prior commitments tend to be saddled with public perceptions generated by the others. This will be relevant to outcomes to the extent that we continue to live in a democratic society (even, say, a violently democratic one).
There is no trial run, and we may not get a warning shot before an AI-related catastrophe that, e.g., kills billions of people; alignment in particular must be “done right the first time,” even though it has very much proven to be an incrementally refinable empirical science under the current LLM regime.
S-risks are both possible and sometimes preferentially generated by “capital” in the most general form (i.e., raw optimization power); we have existence proofs in the form of factory farms and third world nations rife with sweatshops, but also e.g. wild animal suffering of biblical scale (for the former). If sentience-preserving emulation of sentient minds is possible, then there will be an unambiguous choice for society (or whatever entity/structure ultimately has the authority here) to make between an intense global surveillance regime and allowing crimes of unprecedented scale in silico.
Similarly, there’s no rule saying that AI systems powerful enough to pose catastrophic risks necessarily won’t ever be runnable on consumer hardware — the choice may well be between the aforementioned unprecedented surveillance regime and allowing tens or hundreds of millions of people to have access to [the means to create] weapons of mass destruction. For my part, it seems quite possible that there could be an AGI “kernel” capable of generating (from say, a few tens or hundreds of GB of text/compressed data and in a reasonable amount of time) an AI system roughly on par with current frontier models on consumer hardware. Roughly order-of-magnitude per year improvements in cost per token at a given level of speed and quality have added credence to this intuition over the last 3-4 years.
Currently existing frontier AI models are at least as smart as the median human in most ways that matter, with respect to so-called keyboard-mouse-display tasks. They cannot, e.g., quickly pick up a new 3D video game and play it well in real time, but cases like this are increasingly due more to Moravec’s paradox and memory primitives not being tightly integrated with model training than “raw intelligence” or failure to generalize.
Evaluating LLM sentience/consciousness is very, very thorny and we have no way of knowing with any certainty whether or not existing models experience meaningful qualia.
Interpretability research is double-edged and may accelerate capabilities research.
[etc.]
I could go on. I want to emphasize that these are not consensus among any particular group of people, certainly not when all taken together. I have no MNPI to share, and these don’t represent any kind of official position except as far as they overlap with public messaging. Almost all of them are, however, reasonably close to being modal beliefs among the “kind of people Anna talks to [irl].”
For the record, I more or less believe most of these. They also all independently scare the shit out of me. Concrete discussion of these is often skirted around in day-to-day conversation, but one gradually gets the sense that this has more to do with boredom or weariness or tactfulness (or, at the risk of being overbold, because they’re taken for granted/seen as obvious) than any kind of intentional deceit or strategization … so pervasive is the air of autistic openness, at least in the higher-trust social environments.
Despite the general tone of openness and anti-affectation in the Bay, there are niches of immense performativeness (the latter is often an anti-affectation affectation, IME). For example, I found some irony in how I and others have conducted [riskier,] more ambitious, and more successful medical self-experiments than the overwhelming majority of self-ID’d “peptide enjoyers,” whose grey market Chinese peptides tend to be somewhat mundane GLP-1s.
In this vein, I found it interesting to contemplate how aesthetics reflexively assemble themselves — for example, some of the more obvious tendencies and preoccupations of 21st-century technocapital were visible even in the 60s and 70s and found their way into cyberpunk literature, which then shaped the language we use to discuss and think about technology (and society around it), which constrained their development into a narrower set of outcomes, which then (in combination with people holding self-fulfilling preoccupations with cyberpunk themes) made the future cyberpunk.
It is perhaps, in some senses, the strangest and most fraught moment in history that one could have chosen to be recovering from severe health issues, and to be in the earlier months of gender transition, and trying to become net economically and socially useful for the first time. On the other hand,
There is much more I could write, and this post might be updated lightly in the future if I wake up in a cold sweat, recalling something important I forgot to say. This will do for now though.
Arguably by design; this doesn’t really matter though.↩︎
Favorite copy samples include “Stop hiring humans”, “Most AI companies avoid saying they’ll automate jobs. We don’t.”, and “are you BI-curious? ;)” (BI == business intelligence)↩︎
I love my em-dashes and will defend them to the death. No LLM was directly involved in the writing of this post.↩︎
The remainder of this post will be presented as a series of somewhat disjointed vignettes; I couldn’t find it in me to weave these into a more coherent narrative or opinion essay, and I feel it better reflects the contradictory, almost schizophrenic nature of the place.↩︎
Particular to Bay Area classism, as opposed to e.g. NYC finance-sphere classism, seems to be a lack of noblesse oblige or an interest in maintaining stable and cordial relations with other social classes; I model this mostly as a strategic error or a failure of theory of mind. I don’t have strong feelings about this point because I mostly avoid the kind of people that raise the question in the first place.↩︎
Beset on one side by LLM-generated CVs and automated applications at immense scale, and on the other by inhuman hiring screening systems that no doubt weight formal credentials even more heavily than before. I can’t remember the last time one of my friends or acquaintances was hired without a referral or other circumvention of the publicly available hiring interface.↩︎
No pun intended↩︎
From a recent annapost: “anyways, the thing i feel most strongly compelled to push back on is the rising sentiment that the primary thing that makes a nation is its ability to produce nominal return to capital, and that being served by an underclass of fungible imported labor units and afforded other such material comforts by pricing power is the best our elite class can/should aspire to”↩︎
There’s a notable exception: people on work visas who will be deported if their venture fails and they can’t promptly find another job. For obvious reasons, it is also very much an open question whether the relative supply of these kinds of jobs will be greater, lesser, or about the same in a few years.↩︎
There’s a common joke — which is only barely a joke — about how at this point this is a more reliable career path than participation in the FAANG tournament economy.↩︎
“Temperate seasonless Mediterranean climates. This is what cities mean to me.”↩︎
I realized quite late into writing this that I’m doing basically the exact kind of blog post she has been doing for weeks. This is, however, quite a personal kind of writing, and I think my perspective is distinct enough to be welcome/useful.↩︎
My phone’s tracking indicates that I walked an average of over 4 miles per day during my time there, which is a lot for me.↩︎
and for me in particular, the high frequency of ‘unusual interactions that give me pause’↩︎
Quite possibly because they’re scared out of their minds, like everyone else.↩︎
As is, like, acknowledging this tension, potentially, but people generally seem to be forgiving of it.↩︎
Though I have found that this is another thing that exists substantially in the mind, especially when around good/reasonable people.↩︎
Real people that I have personally verified to exist in the physical world and not be agitprop bots!↩︎
In this section, you may consider this a shorthand for “literally, actually, for real, in the normative sense of the term ‘die’|‘live forever’|etc”↩︎
I haven’t seen a convincing deflation-based counterargument here that acknowledges the cost disease afflicting e.g., housing and healthcare, so I’m not bothering to acknowledge this point inline. If you find one, please send it my way, I’m always interested in hearing whitepills.↩︎