If you’ve been an observer or participant in Twitter’s Bitcoin sphere, you most likely know of “PlanB,” who also goes by the moniker “100TrillionUSD.” PlanB is an institutional investor in Europe who moonlights as a Bitcoin analyst, focusing on quantitative and technical analysis.
He is best known for formulating the Bitcoin Stock to Flow (S2F) model. The econometric model suggests that there is a relationship between BTC’s level of scarcity and its price, and it predicts that in the coming two years, the price of the cryptocurrency will rise to a range of $55,000 to around $100,000.
That means that from current levels, Bitcoin could rally anywhere from five to ten times.
What’s important about the model is that it has a high R squared value, a statistical measure of how closely observed data points fit a model’s predictions.
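For readers unfamiliar with the statistic, R squared can be computed in a few lines. The sketch below uses made-up numbers purely for illustration; it is not PlanB’s data or his actual fit.

```python
# Minimal sketch of how R squared is computed: the fraction of variance
# in observed data that a model's predictions account for.
def r_squared(observed, predicted):
    mean = sum(observed) / len(observed)
    ss_tot = sum((y - mean) ** 2 for y in observed)                  # total variance
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))  # unexplained variance
    return 1 - ss_res / ss_tot

# Hypothetical example: predictions close to the observations give R² near 1.
obs = [1.0, 2.0, 3.0, 4.0]
pred = [1.1, 1.9, 3.2, 3.9]
print(round(r_squared(obs, pred), 3))
```

A value near 1 means the model explains most of the variation in the data, which is the property the S2F model’s proponents point to.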
While many have branded the S2F model as pure “hopium,” it recently gained credence with a new scarcity analysis.
A model predicting $100,000 Bitcoin gains some credence
The premise of the model is that whenever Bitcoin experiences a halving, BTC’s fair value sees an exponential increase.
In isolation, the model may seem somewhat irrational: how is Bitcoin supposed to rise an order of magnitude (or two) every market cycle without something breaking?
As a pseudonymous analyst explained, the critics of the model forget the exponential nature of the adoption of technologies, whether that be the internet or Bitcoin.
1) Everyone is familiar with @100trillionUSD S2F model. Critics recoil because it seems to scale too rapidly. What they’re forgetting is an intangible amplifying force that I wanted to ballpark quantify: the normal distribution of technology adopters. pic.twitter.com/1Me6nACry0
Thus far, Bitcoin has a market penetration of around 10 million users, he postulated. (Estimates on this figure vary, but most place it in the 10-50 million range.) This matters because BTC’s total addressable market is approximately 2.2 billion individuals, the estimated number of people in the world with more than $10,000 in assets.
That’s to say, only about 0.5 percent of the people who could theoretically adopt Bitcoin have done so.
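The half-a-percent figure follows directly from the two numbers quoted above, as a quick back-of-the-envelope check shows:

```python
# Back-of-the-envelope check of the penetration figure cited in the analysis.
users = 10_000_000    # estimated current Bitcoin users (estimates vary widely)
tam = 2_200_000_000   # people worldwide with more than $10,000 in assets
penetration = users / tam
print(f"{penetration:.2%}")  # ≈ 0.45%, i.e. roughly half a percent
```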
The adoption curve of modern technologies is exponential and faster than ever. Social media platforms can be adopted by billions of people in just a few years. TikTok has almost a billion active users just three and a half years after its launch.
The cliche but accurate chart below depicts this exponential phenomenon well.
Although Bitcoin has been moving slower than, say, social media or smartphones, its growth curve has undoubtedly been exponential.
Assuming this continues, the number of Bitcoin users (which corresponds with demand for BTC) will exponentially increase while the number of BTC mined will exponentially decay. This will create a phenomenon where BTC becomes super scarce, with any net marginal increase in demand for Bitcoin pushing up prices in an outsized manner.
“The ‘adoption adjusted scarcity’ numbers suggest a level of scarcity much more dramatic than the raw Bitcoin mining numbers convey at face value,” the analyst wrote.
“The intent was simply to factor in how bell curve adoption […] shows a ~10x increase in ‘adjusted scarcity’ per reward era, much like we see in the S2F Model.”
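The supply side of that argument comes straight from Bitcoin’s published issuance schedule. The sketch below computes the raw (unadjusted) stock-to-flow ratio at the start of each reward era, assuming the protocol’s parameters of a 50 BTC initial reward, a halving every 210,000 blocks, and roughly 144 blocks per day; the analyst’s “adjusted scarcity” further multiplies this by an adoption factor, which is not modeled here.

```python
# Raw Bitcoin stock-to-flow (S2F) per reward era: existing stock / annual issuance.
# Protocol parameters: 50 BTC initial subsidy, halving every 210,000 blocks.
BLOCKS_PER_ERA = 210_000
BLOCKS_PER_YEAR = 144 * 365  # ~52,560 blocks at ~144 blocks/day

def s2f_at_era(era: int) -> float:
    """S2F ratio at the start of the given reward era (era 0 = launch)."""
    reward = 50.0 / (2 ** era)                                   # current block subsidy
    stock = sum(BLOCKS_PER_ERA * 50.0 / (2 ** e) for e in range(era))  # coins already mined
    flow = reward * BLOCKS_PER_YEAR                              # new coins per year
    return stock / flow

for era in range(1, 5):
    print(f"era {era}: S2F ≈ {s2f_at_era(era):.1f}")
```

The ratio rises from roughly 8 to 24 to 56 to 120 across the first four post-launch eras, which is the mechanical doubling-plus effect each halving produces before any adoption adjustment is layered on top.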
There are still skeptics of the model
There are many skeptics of the model, despite what evidence there may be to back it up.
Alex Krüger, an economist closely tracking the cryptocurrency space, has stated that the model is inherently flawed because Bitcoin’s scarcity is algorithmic and known in advance, not random, meaning it can technically be priced in.
“People using S2F to predict BTC may as well be using the moon cycles to predict BTC. […] The S2F analysis is interesting. But the S2F model is useless for predicting price, as the underlying assumptions of the model are not met. Now and always.”
This was echoed by Hugo Nguyen, a crypto writer and a 2019 resident at Bitcoin development firm Chaincode Labs. He opined that the model’s R squared is derived from a “generous margin of error” and that the data on which the model is based is “pathetic.”