Every few weeks, I invite an interesting thinker to curate Exponential View. It allows them to share their ideas with us in an intimate setting.
This week, Zavain Dar, a young, bright investor in deep technologies, has stepped up to the plate to introduce us to “New Radical Empiricism”, a form of enquiry enabled by data and machine learning that seems to be a fruitful alternative to the scientific method.
Zavain has been a long-time supporter of our mission here, and I’m pleased to present his Exponential View.
🎹 P.S. EV readers Nadya Peek and Marko Ahtisaari (aka Construction) created a Spotify playlist to enrich your exponential experience every week. This rotation features Brazilian forro, Chicago house, English dream pop, and tracks from their recently released EP.
I’ve spent my academic and professional career exploring the edges and intersections of Artificial Intelligence and Philosophy. Spurred by frustration with Expert Systems design, notably General Game Playing, I’ve spent the majority of the last decade asking how we can use data and computers to remove bias and “an underlying assumption of structure” from various disciplines and domains. This pursuit is succinctly described by two terms I’ve co-opted - “Data Nihilism” and “Radical Empiricism” - which I’ve playfully employed across numerous courses I’ve taught, cross-domain investments I’ve made (e.g. fintech and bio), and even a stint advising some of the sharpest minds in all of pro sports (Trust the process!).
Preamble to radicalness
Johannes Kepler famously discovered the laws of planetary motion (quick primer, here) in the early 17th century. To do so he likely gazed long and hard at the sky, developed hypotheses of underlying rules, and tested the accuracy of his predictions against the next night’s sky. Explicitly, he assumed:
- There existed a metaphysical series of “laws”, static and innate to the universe. His role was as a detective, to unearth them.
- His particular language of mathematics sat at the correct layer of abstraction and specificity to unearth, describe, and reduce these laws. (In retrospect, it didn’t!)
- That the laws were within an order of complexity graspable by human minds.
In other words, Kepler employed the scientific method, something every primary school kid is deeply familiar with. His belief in an underlying and accessible metaphysical structure was a philosophical stance - loosely described as “reductionism” - that drove him not only to fame in physics but also to other “interesting” results, such as the claim that the universe’s elements could all be reduced to Platonic geometric solids.
Early practitioners in the field of Artificial Intelligence were largely reductionists who converted into the field from mathematics and physics. Just as Kepler retained a consistent philosophical underpinning as he traversed from physics to, ummm, chemistry, so did these early AI researchers as they jumped from mathematics and physics into the computational study of intelligence. Their self-circumscribed work as researchers was to intuit the “shape” of intelligence: a structure that could be described in the language of code, within an order of complexity that human programmers could explicitly write out.
Just as we today scoff at Kepler’s realism applied to chemistry, I’d imagine we’re not far from the same visceral reaction when considering the earliest instantiations of AI - dare I say, Expert Systems and Computational Logic. The hints are everywhere. Google’s Director of Research, Peter Norvig, now publicly calls these early practitioners “Mystics”. In essence, the creation of digitized data, the availability of compute, and the maturation of distributed computing paradigms haven’t only ushered in an era of Machine Learning; they’ve rocked the underpinnings of the Scientific Method.
The current rise of Machine Learning, notably Deep Learning, follows from relaxing the assumptions of what intelligence is. It’s no longer a static metaphysical entity, likely isn’t accessible given our naive abstractions of knowledge, and a human almost certainly can’t explain all the rules that define it. To practice AI today is to have evolved away from Reductionism: to view the world as void of structure, intimately self-aware of the complexity underlying all domains, and unburdened by the expectation of finding grok-able ground truth. Look at data, model data, rinse, repeat - and nothing else.
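To make “look at data, model data, rinse, repeat” concrete, here’s a deliberately minimal sketch - my own illustrative Python, not anyone’s production system - of a predictor that forecasts the next value of a chaotic series purely by memorizing observed transitions. The generating rule (a logistic map) exists only inside the simulated “world”; the modeler never sees it.

```python
# Minimal sketch of the empiricist loop: observe data, model it, predict -
# without ever writing down the generating equation.

def generate_series(n, x0=0.2, r=3.9):
    """The world's hidden process (a chaotic logistic map). The modeler
    only ever sees its output, never this function."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

def fit(history):
    """'Model data': simply memorize every observed (state, next_state) pair."""
    return list(zip(history[:-1], history[1:]))

def predict(model, x):
    """Predict the next value from the nearest previously observed state."""
    _, nxt = min(model, key=lambda pair: abs(pair[0] - x))
    return nxt

series = generate_series(2000)
model = fit(series[:-1])             # look at data, model data
guess = predict(model, series[-2])   # predict "tomorrow"
error = abs(guess - series[-1])
```

No hypothesis about the underlying law is ever posed or tested; the only move is pattern-matching against the data itself - rinse and repeat as more observations arrive.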
_Welcome to New, machine learning-enabled, Radical Empiricism (NRE)._
New Radical Empiricism in practice
Equipped with the lens of a dissipating Scientific Method, an out-of-favour Reductionism, and an increasingly popular Empiricism - we can see NRE appear in almost every single domain, not just the research labs of AI. I’d go so far as to argue this subtle philosophical shift is one of the predominant trends that will define the first half of the 21st century. Below, some examples.
- Back to Kepler: A young Kepler today likely wouldn’t look into the sky in search of structure, but would rather collect the prior month of visual sky data and train a recurrent neural network to predict tomorrow night’s sky. To an approximation, Google’s DeepMind is doing just that: Kepler’s Laws with Interaction Networks.
- Butterfly Effect: Chaos Theory - a field stymied by complexity and mathematics beyond what humans have (yet) shown themselves capable of discovering and modeling - sees record-breaking predictive modelling as University of Maryland researchers use Deep Nets, absent knowledge of any classical equations, to model chaotic systems with 8x higher accuracy than the best traditional reductionist methods.
- Moneyball: First popularized by Michael Lewis’ Moneyball, which recounted the Oakland A’s use of data, rather than the flawed abstractions of scouts, to evaluate players. It led to a series of record seasons. Radical empiricism now rears its head across every major sport. The 76ers, under former GM Sam Hinkie, stand center stage for an empiricist strategy so disruptive that the league literally had to adapt and change its rules in response. Here, NPR covers the philosophy behind “Trust the Process” and Hinkie’s 76ers.
- Macroeconomics is dead: Or at least, in the reductionist sense. Decades of failure from macroeconomists suggest either 1) there is no static underlying structure or 2) its complexity rivals that of Chaos - likely beyond the reach of mechanistic human grasp. Ironically, some of the largest successes in the implementation of economic policy have occurred when reductively trained economists have had the least input! As Feynman famously stated: "Imagine how much harder physics would be if electrons had feelings!" Economists are taking note: MIT’s rising star, Professor Andrew Lo, argues we should stop borrowing philosophy from physics to model economics - a concept he jokingly calls “Physics Envy”.
- Biology for dummies: Big pharma pipelines follow an exponentially worsening cost curve, dubbed Eroom’s Law (Moore’s Law backwards). It’s so bad that some analysts are questioning the long-term viability of the entire industry! We’re riddled with failures in translating reductive human knowledge of molecular biology and chemistry from the research lab to the clinic. It’s fair to suspect both that we haven’t discovered the apropos abstractions for analysis and that we may not need to understand them in the first place. To continue hammering an increasingly blunt nail: if a tornado appears in Iowa, should we work to set up local shelters, or analyze Chaos Theory to discover it was triggered by a butterfly in Belize? Pharma veterans are skeptical of AI’s effect on drug discovery after historical failures from computational approaches, most of which still used reductive models of underlying structure as their substrate. But here too, times are changing! Janssen’s recent Cell paper impressed even the staunchest critics. While Big Pharma slowly awakens to the reality of empiricism in the wet lab, two companies - Insitro and Recursion (where I am an investor) - have announced ambitious plans, raising north of $80M each to bring Radical Empiricist approaches to drug discovery.
- Swinging too far: Google researcher Ali Rahimi, who gained “geek fame” late last year after likening the current practice of Deep Learning to “alchemy” at NIPS, continues to ring the alarm. Has NRE gone too far?
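The Kepler example above can be made concrete in a few lines. This is an illustrative sketch of my own (using approximate textbook orbital values, and plain least squares rather than DeepMind’s interaction networks): given only observed (distance, period) pairs, a fit in log space recovers the 3/2 exponent of Kepler’s third law without the law ever being assumed.

```python
import math

# Observed (semi-major axis in AU, orbital period in years) for six planets.
# Approximate textbook values; the "law" itself is never written into the code.
observations = [
    (0.387, 0.241),   # Mercury
    (0.723, 0.615),   # Venus
    (1.000, 1.000),   # Earth
    (1.524, 1.881),   # Mars
    (5.203, 11.862),  # Jupiter
    (9.537, 29.457),  # Saturn
]

def fit_power_law(data):
    """Ordinary least-squares fit of log(T) = k*log(a) + c; returns k."""
    xs = [math.log(a) for a, _ in data]
    ys = [math.log(t) for _, t in data]
    n = len(data)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

exponent = fit_power_law(observations)  # ≈ 1.5, i.e. T² ∝ a³
```

The empiricist never posits “T² ∝ a³”; the exponent simply falls out of the data - which is exactly the philosophical inversion the examples above describe.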
Short morsels to appear rad at dinner parties
💯 Where was b0rk when I took systems classes in University?!
Does family income play a role in childhood development? Researchers at Columbia set off on a 3-year study connecting basic income to childhood development.
✊ Great overview of the neglected technologies that were supposed to keep the web open and free.
🐍 AI gets tricked by the Rotating Snake illusion.
Story of the last survivor of the last slave ship, Cudjo Lewis.
The world surveilled by Palantir, where everyone’s a suspect.
🐄 Adding less than 2% dried seaweed to a cow’s diet can reduce methane emissions by 99%.
Interactive: wisdom & madness of the crowds.
Azeem's end note
My rough summary is that the availability of data and machine learning might create new pathways of exploration in a range of domains where traditional techniques have become stuck. It's almost as if the availability of data (and better maths with which to manipulate it) allows us to create new dimensions from which to establish vantage points to surveil the problem set.
Incidentally, if you're in NYC on May 23 & 24, don't miss the LDV Vision Summit run by our reader Evan Nisselson. Enter "EV" when you register for 40% off.
See you all next week,