The Futurist

"We know what we are, but we know not what we may become"

- William Shakespeare

ATOM Award of the Month, January 2021

It is time for another ATOM AotM.  This one, in particular, goes retro and ties into concepts from the early days of The Futurist, back in 2006.  This award also dispels the misinformed myth that 'we can no longer send a man to the Moon like we did in 1969-73' or, even worse, that 'human progress peaked in 1969'.  If anything, space-related advancement has been steadily tracking an exponential trendline.  

The first man-made object to be placed into orbit and send back information from it was Sputnik in 1957.  Since that time, satellites have risen in number and complexity.  But the primary cost of a satellite is not the hardware itself, but rather the cost of getting it to orbit in the first place.  While the early data is sparse and the trendline was not easy to discern, we are now at an inflection point of this trajectory, enabling a variety of entities far smaller than governments to launch objects into orbit.  If the trendline of a 10x reduction per decade is in fact manifested, then a number of entirely new industries will emerge in short order.
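The 10x-per-decade trendline is easy to extrapolate.  A minimal sketch (the $10,000/kg starting figure is an illustrative assumption on my part, not a number from the chart):

```python
# Extrapolate launch cost per kg under a 10x-per-decade decline.
# Assumed starting point: ~$10,000/kg in 2010 (illustrative only).
def launch_cost_per_kg(year, base_year=2010, base_cost=10_000.0):
    """Cost per kg to orbit, assuming a 10x reduction every decade."""
    decades = (year - base_year) / 10
    return base_cost * 0.1 ** decades

for year in (2010, 2020, 2030, 2040):
    print(year, round(launch_cost_per_kg(year), 2))
```

Under these assumptions, each decade knocks a zero off the price, which is what turns orbit from a government monopoly into an addressable market for small entities.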

The emergence of private enterprises that can create profitable businesses from space is an aspect of the 21st century that is entirely different from the capital-intensive government space programs of the second half of the 20th century.  From geospatial data to satellite-derived high-speed Internet, the era of commercial space is here. 

SpaceX has already begun the Starlink program, which advertises 1 Gbps Internet access for rural customers.  It is not yet apparent how SpaceX will upgrade the hardware of its satellites over time, but if the 1 Gbps speed is a reality, this will break the cartel of existing land-based ISPs (such as Comcast), where the gross margin they earn on existing customers is as high as 97%.  Needless to say, high-speed access available to the backwaters of the world will boost their economic productivity.  

Other efficiencies are on the horizon.  3D Printing in space is very pragmatic, as only the filament has to be replenished from Earth, while finished objects are simply printed in orbit.  Since filament never has an awkward shape, it is far less expensive to send it up to an orbiting 3D printer than to launch the finished objects themselves.  Asteroid mining is another such efficiency, and is an extension of the fundamental ATOM principle that technology always increases the supply of, or alternatives to, any commodity.  The prices of precious metals on Earth could collapse when asteroid mining reaches fruition, to a much greater extent than oil prices plunged from hydraulic fracturing.  

But the falling cost of launch per unit weight is only half of the story.  To see the second exponential, we go all the way back to an article from April 22, 2006, titled 'Milli, Micro, Nano, Pico'.  The point is that the ability to engineer at smaller and smaller dimensions (integrated circuits with 5 nm transistors), at greater and greater scale, comprises a double exponential of technological intricacy and integration.  Surely, this has to result in a modernization of the electronics sent up into space.  

Consider the major unmanned spacecraft that NASA has launched, such as the Pioneer, Voyager, and Cassini probes.  These were electronics from the 1970s, and the designs have not been updated to this day : the New Horizons probe (launched in 2006) was still the same size.  We know that an electronics design, from 1975 to 2020, is expected to shrink in both size and cost by a factor of over 1 million.  If a supercomputer the size of an entire room in 1975 is less powerful than a 200-gram Raspberry Pi system in 2021, then why is NASA still launching one-ton devices that have incorporated none of the advances in electronics of the last 45 years?  The camera and transmitter on Voyager 2 are surely far less powerful than what exists in 2021 smartphones.  
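The million-fold figure follows directly from Moore's Law-style compounding.  A quick back-of-the-envelope sketch (the two-year doubling period is an assumed parameter, one common statement of Moore's Law):

```python
# Improvement factor for electronics under Moore's Law-style doubling.
# Assumption: price-performance doubles every two years.
def improvement_factor(years, doubling_period=2.0):
    """Factor by which electronics price-performance improves over a span of years."""
    return 2.0 ** (years / doubling_period)

# 1975 to 2020 is 45 years: 2^22.5, i.e. several million-fold.
print(f"{improvement_factor(45):.2e}")
```

Even with a more conservative doubling period, the cumulative factor over 45 years lands in the millions, which is why a one-ton 1970s-class probe design looks so anachronistic next to a smartphone.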

Given the continued shrinkage in electronics and decline in launch costs, it is long past time for thousands of Voyager-type probes, each the size of a smartphone, to be launched in all directions.  Every significant body in the Solar System should have a probe around it taking pictures and other readings, and the number of images available on the Internet should be hundreds of times greater than it is now.  This will happen once someone with the appropriate capabilities notices how far behind the electronics of NASA and other space agencies are.  

Hence, this ATOM AotM makes use of up to three exponential trends at once.  But the decline in launch costs per unit weight alone has immense implications.  

This will be the final ATOM AotM posted on this website as an article.  Future instances will be on my new YouTube channel, which I hope to inaugurate in February.  

 

Related ATOM Chapters :

3.  Technological Disruption is Pervasive and Deepening

12. The ATOM's Effect on the Final Frontier

 

 

January 10, 2021 in Accelerating Change, ATOM AotM, Space Exploration, The ATOM | Permalink | Comments (15)


More ATOM Proof Piles Up

We are very near to being able to declare absolute victory on the ATOM thesis.

Remember that March 15, 2020 really was the 'Netscape Moment' in Economics.  The US Fed Funds rate, which was the only major rate in the world that was foolishly high at that point, went from 1.5% down to 0% (permanently), and trillions in new monetary creation commenced.  As of August, the combined balance sheets of the four major central banks are growing at +35.3% on a year-over-year basis (source : Yardeni).  

Meanwhile, the US 10-yr Treasury Note languishes at a 0.7% yield, the weighted average yield of all high-grade 10-yr Sovereign Bonds worldwide is at approximately 0.00%, and oil remains below $40/barrel even now, while the tech-laden Nasdaq 100 continues to make new all-time highs.  What more proof is required that monetary creation a) does not cause inflation up to a pretty high annual rate of creation, and b) finds its way into technology, to produce more technology?  

Now, we get the benefit of probing where the ceiling of the monetary creation gradient might be.  I have maintained in the ATOM publication that 16-24% was the optimal rate of increase (based on my own proprietary research about the depth of technological density and acceleration), with a lower number resulting in insufficient inflation and a higher number causing brief inflation.  Now, we happen to see a 35.3% net YoY increase.  This is well above the band I specified above, but it also follows a period of slack, which means the CAGR over the last several years is still well below the 24%/yr upper bound.  
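The slack-then-overshoot point is easy to verify numerically.  A sketch, where the 5%/yr figure for the slack years is my own illustrative assumption rather than data from the article:

```python
# Illustrative check: a big YoY overshoot after slack years can still leave
# the multi-year CAGR below the 24%/yr upper bound.
slack_years = [0.05, 0.05, 0.05]   # assumed feeble balance-sheet growth, three years
overshoot = 0.353                  # the +35.3% YoY increase observed in 2020

total = 1.0
for g in slack_years + [overshoot]:
    total *= 1 + g

cagr = total ** (1 / (len(slack_years) + 1)) - 1
print(f"4-year CAGR: {cagr:.1%}")   # well below the 24%/yr ceiling
```

Under these assumptions the four-year CAGR comes out near 12%/yr, so even a 35.3% single-year print leaves the multi-year trendline inside the 16-24% band's lower half.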

If the current YoY increase is in fact an overshoot above the optimal zone, there will be a very brief blip in the CPI.  This will cause the disgruntled inflation hawks and PhD Economists to emerge from the woodwork to point out how 'the entire ATOM thesis is wrong'.  They will be suitably embarrassed yet again, since the blip will be very brief once the trendline of 16-24% catches up.  As we can see from the second chart, the CPI is just not having it.  

Nor is the Goldman Sachs Commodity Index, which represents worldwide prices of all commodities (oil, gold, natural gas, silver, coffee, etc.).  It is down a whopping 60% from its 2010 levels, despite all the QE.  Even this index does not represent the true scale of commodity deflation, since I contend that computational power, storage, and bandwidth should all be commodities in this index (as volatility already is, despite not having a physical form).  Inclusion of these components would reveal a faster as well as more accurate deflationary picture.  This trend can only continue and accelerate through the 2020s and beyond.  

Also note how large the base of cumulative monetary action now is.  As we see from the chart, the YoY dollar amount is $7 Trillion, and this is just for the four largest central banks (which account for 85% of all monetary creation).  Just to stay at 16% YoY growth for the next 365 days, another $4.3 Trillion has to be created.  
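The arithmetic behind that $4.3 Trillion figure can be sketched as follows (the ~$27 Trillion combined balance sheet is my own back-calculation from the $7 Trillion and 35.3% figures above, not a number taken from the chart):

```python
# Back out the combined central bank balance sheet from the YoY figures,
# then compute the QE needed to sustain a 16% YoY growth rate.
yoy_increase = 7.0        # $ Trillion added over the last year (from the chart)
yoy_pct = 0.353           # +35.3% YoY (source : Yardeni)

prior_base = yoy_increase / yoy_pct          # balance sheet a year ago, ~$19.8T
current_base = prior_base + yoy_increase     # balance sheet today, ~$26.8T

needed_at_16pct = 0.16 * current_base        # QE needed over the next 365 days
print(f"current base ~ ${current_base:.1f}T, 16% of that ~ ${needed_at_16pct:.1f}T")
```

The ever-larger base is the point: each year of trendline growth requires a larger absolute dollar amount of monetary creation than the year before.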

As I said in June :

I said elsewhere that the decade of the 2010s had $23 Trillion of cumulative QE worldwide.  The PhD Economists of the world, who have predicted 100 of the last zero bouts of hyperinflation, still believe QE is an aberration and assume that the cumulative QE will be reversed (i.e. that the 2020s will have -$23 Trillion of cumulative QE).  I claim the opposite, which is that under both ATOM principles and the Accelerating Rate of Change, the 2020s will see about $100 Trillion of QE, and that this will move towards sending cash directly to people (rather than the esoteric bond-buying that comprises QE today, which inevitably concentrates the benefit of this monetary creation in very few hands).  

Does anyone doubt that the 2020s will in fact see $100 Trillion of QE?  The first eight months of 2020 are certainly on track for that trend.  That means it is also on track for a greater diffusion of future monetary creation.  The current channels are super-inefficient, super-saturated, and frankly, one could scarcely devise a better way for all new monetary creation to go just to the wealthiest tech billionaires while average people get nothing.  

Furthermore, while bad governance can destroy anything (and this sort of new safety net actually increases the level of bad governance, as the penalties are delayed), the fact that the central banks of the world reacted so quickly means that a number of negative economic phenomena might very well be in the past.  For example :

i) There may never be a traditional recession again, based on the technical definition of a recession, which is two consecutive quarters of negative 'Real' GDP.

ii) There may never again be a stock market correction so severe that the S&P 500 remains over 10% below its all-time high for a full calendar year.  

iii) The S&P 500 may never again go more than three years without making an all-time high.  Remember that dividends (about 1.7%/yr) also exist.  

Points ii) and iii) above prove that the equity index, rather than gold, is the true safe haven.  The gradient of progress in the modern era is just too steep for the multi-year recessions of the past to happen anymore barring the worst governance.  The divergence between the performance of gold vs. that of the Nasdaq 100 over the last decade is extreme.  

The proof is piling up.  The Economics PhD ivory tower cannot continue their denial forever, as they are already headed for the dustbin of history.  Yes, most recent articles here have been very similar, but remember that we are in the midst of a seminal historical turning point that almost no others have caught on to yet.

Update : For those worried about Money Supply, note that M1 has increased 42% YoY, and M2 about 24%.  This is at a level where even I thought there could be inflation, since M1 is the most liquid and rapidly-circulated pool of money.  Such inflation could happen, but has not happened yet.  

If big increases in even M1 have not caused inflation (still TBD), then the case for ATOM-DUES is even stronger, as one of the last few unknowns has been exposed as a non-event.  

 

Related ATOM Chapters :

2 : The Exponential Trendline of Economic Growth

4 : The Overlooked Economics of Technology

 


September 17, 2020 in Accelerating Change, Economics, Stock Market, The ATOM | Permalink | Comments (44)


ATOM Award of the Month, June 2020

The pandemic has ratified and accelerated a whole host of ATOM principles, so I have to update parts of the entire publication.  Suffice it to say, a number of pent-up ATOM predictions just got fast forwarded, with a seminal day in the history of economics having been forced into manifestation.  We can divide the events into two parts : technological and monetary.  

Among technological disruptions, there are three that qualify as having been overdue for a long time, that got tipped over by this catalyst :

1) Video Conferencing : This was something that Cisco expected to take off 14 long years ago, but expensive proprietary hardware and the inertia of old habits prevented it from attaining the critical mass necessary for entrenchment.  Cisco lost at least $6 Billion on this endeavor.  Now, however, as people are forced to work from home, a critical mass of users has to adapt to this usage, which in turn attracts more innovation and capital to the technology.  While none of the companies advancing videoconferencing in 2009 are the ones winning now, this is common in the technology sector (recall the search engine wars).  The cascade of disruptions I listed in 2009 still applies.  Among other things, if cubicle-style workplaces can agree that all on-premise meetings are restricted to three days a week (M-W, or Tu-Th, or whatever), then the distance that an employee can commute effectively doubles, and the housing available to them thus increases 4X, since commutable area rises as the square of commute distance.  The current status quo of certain real estate being vastly more expensive than equivalent real estate 30 miles further from the jobs cluster may finally correct.  This is a form of standard-of-living increase that is poorly captured in GDP statistics.  

2) Educational Institutions : The extraordinarily distorted cost/value equation of both higher and lower educational institutions (which should not be conflated with the concept of 'education') already crossed the point of no return in 2015.  But, as with videoconferencing, too few people were willing to be 'Spartacus' and make use of alternative solutions that were in fact lower risk.  This applies to both students and employers, for employers declaring that they will hire based on on-site testing and online certifications, rather than degrees that bear little predictive value of employee performance, is the catalyst that would have induced more students to bypass the universities-as-gatekeeper oligopoly.  The fact that universities want to charge the same tuition for online classes (and are being sued by students disputing this), when comparable online classes are available for orders of magnitude lower prices, is going to reduce US university enrollment permanently.  To cope, US universities may well be forced to return to a 1980s-era cost structure, and there is no reason they cannot.  

3) Retail Real Estate Re-Purposement : Overlooked among the technological and economic effects of this black swan is the fact that the 'retail apocalypse' and shift to e-commerce have fast-forwarded to such an extent that millions of acres of US retail land (including parking lots) will never return to previous levels of usage.  This was partially mentioned in the ATOM AotM for August 2017, and is often brought up in comments.  E-commerce was still just 12% of all retail sales before the pandemic, but if that 12% were to shift to 15%, effectively jumping two years ahead of the previous trend, that alone is a vast acceleration with visible results for the suburban landscape.  

In fact, when you combine the permanently lower demand for premium office space from the greater usage of videoconferencing with the mass closure of retail real estate (at least in the US, where six times as much land is allocated to retail as in most advanced countries), the correction and pressure to re-allocate could be extreme.  In places like California, the extreme restrictions on new residential construction will be exposed even more visibly as office space joins retail in a permanent glut.  

But the bigger event was not even these technological accelerants.  Instead, the complete and supreme validation of all ATOM conclusions was manifested fully.  Recall that the Federal Reserve was actually reversing QE and increasing interest rates in 2019.  It had begun to pause and correct that misguided reversal process, but still at too timid a rate of net increase to even keep up with the ATOM trendline of monetary creation required to halt technological deflation.  

However, this crisis forced the Federal Reserve to do the right thing, even if they still don't understand the new economics of technology.  March 15, 2020, is a day that can fairly be described as the 'Netscape Moment in Economics'.  For those who recall the original 'Netscape Moment', on August 9, 1995, the Internet browser company Netscape did an IPO that exceeded its anticipated price by a huge margin, and triggered a boom in Internet company formation for the next 4.5 years.  Even after the bust, the economy was permanently in the Internet age.  Similarly, 3/15/2020 is the day when the Federal Reserve, in one fell swoop, lowered the Fed Funds rate to 0% (where it should have been all along), and signaled permanent QE.  In the following 10 weeks, over $3 Trillion of new QE was done, and the entire trajectory is starting to look more like the exponential parabolas that we are accustomed to seeing wherever the accelerating rate of change and exponential technology emerge.  As of May 31, here are two charts to depict the total QE effect (source : Yardeni) :

The first chart indicates the cumulative rise in the sum of the four major central bank balance sheets.  Note the feeble attempt to reduce the balance sheets in 2018-19, only to be forced back to the trendline.  The second chart is the YoY percentage increase.  I have always said that the ATOM requires 16-24%/yr as an annual rate of increase to offset deflation and maintain optimal (2-3%) inflation.  The increase is now probing the upper limit of even my range, and it will be interesting to see if inflation emerges even then, or if the ceiling is even higher than I estimated (meaning that technological progress is now even faster and broader than before, and monetary creation could be higher than before).  

I said elsewhere that the decade of the 2010s had $23 Trillion of cumulative QE worldwide.  The PhD Economists of the world, who have predicted 100 of the last zero bouts of hyperinflation, still believe QE is an aberration and assume that the cumulative QE will be reversed (i.e. that the 2020s will have -$23 Trillion of cumulative QE).  I claim the opposite, which is that under both ATOM principles and the Accelerating Rate of Change, the 2020s will see about $100 Trillion of QE, and that this will move towards sending cash directly to people (rather than the esoteric bond-buying that comprises QE today, which inevitably concentrates the benefit of this monetary creation in very few hands).  

Mark my words.  The entire profession of economics, full of PhDs who have never had any contact with entrepreneurs and real-time economic decisions, will be wrong by an epic margin.  

 

 

June 01, 2020 in Accelerating Change, ATOM AotM, Economics, Technology, The ATOM | Permalink | Comments (60)


ATOM Award of the Month, November 2019

It is time for another ATOM AotM.  This month's award has a major overlap with the November 2017 award, where we identified that telescopic power has been computerized, and as a result was rising at 26%/yr.  This itself was a finding from a much older article from all the way back in September 2006, where I first identified that telescopic power was improving at that rate.  

But how do better telescopes improve your life?  Learning about exoplanets and better images of stars are fun, but have no immediate relevance to our individual daily challenges.  If you are not interested in astronomy, why should you care?  Well, there is one area where this advancement has already improved millions and possibly billions of lives : we have now mapped nearly all of the Near Earth Objects (NEOs) that might be large enough to cause a major disaster if any of them strike the Earth.  Remember that such an object may have a mass of billions of tons, traveling at about 30 km/sec (image from sciencenews.org), and there are many thousands of them that have each orbited the Sun over 4 billion times.  

All of us recall how, in the 1990s, there were a number of films portraying how such a disaster might manifest.  Well, in the 1990s, we had little awareness of which objects were nearby at what time, and so there really was a risk that a large asteroid could hit us with little or no warning.  However, as telescopes improved, 26%/yr (the square root of Moore's Law, since pixel count increases as the square of linear dimension shrinkage) got to work on this problem.  Now, as of today, all asteroids larger than 1km are mapped, and almost all of the thousands that are larger than 140m (the size above which an object would actually hit the surface, rather than burn up in the atmosphere) are mapped as well (chart from Wikipedia).  We have identified which object might be an impact risk in what year.  In case you are wondering, there is a 370m asteroid that will get very near (but not hit) the Earth in 2036.  Of course, by 2036, we will have mapped everything with far more precision, at this rate of improvement.  In other words, don't worry about an asteroid impact in the near future, as none of significance are anticipated in the next 17 years, and probably not for much longer than that.  Comets are a different matter, as we have not mapped most of them (and cannot, as of yet), but large ones impact too infrequently to worry about.  
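The 'square root of Moore's Law' relationship can be sketched numerically.  Assuming (for illustration) that sensor pixel count doubles every 18 months, the annual gain in linear resolution is the square root of the annual gain in pixel count:

```python
# Telescope resolving power as the square root of Moore's Law.
# Assumption (illustrative): sensor pixel count doubles every 18 months.
pixel_growth = 2.0 ** (1 / 1.5)          # annual pixel-count growth, ~1.587 (+59%/yr)
linear_growth = pixel_growth ** 0.5      # linear resolution grows as the square root

print(f"pixel count: +{(pixel_growth - 1):.0%}/yr, "
      f"linear resolution: +{(linear_growth - 1):.0%}/yr")
```

Under that assumed doubling period, the square root works out to almost exactly +26%/yr, matching the telescopic power trendline first identified in 2006.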

Hence, the risk of an impact event, and mitigation thereof, is no longer a technological problem.  It is merely a political one.  Will the governments of the world work to divert asteroids before one hits, or will they only react after one hits in order to prevent the next impact?  These questions are complicated, as this problem is completely borderless.  Why should the United States pay the majority of the expense for a borderless problem, particularly one that has a 71% chance of hitting an ocean?  At any rate, this is another problem that went from deadly to merely one of fiscal prioritization, on account of ATOM progress.  

More interestingly, within this problem is another major business opportunity that we have discussed here in the past.  Asteroid mining is a potential industry that goes hand-in-hand with asteroid diversion, as outright pulverization may waste precious metals that could otherwise be captured.  Many asteroids have a much greater proportion of precious metals than the Earth's surface does, since 'precious' metals are heavy and most of their quantity sank to the center of the Earth while the Earth was forming, while an asteroid, with much lower gravity, has its precious metals more evenly distributed throughout its structure.  There are already asteroids identified that have hundreds of tons of gold and platinum in them.  Accessing these asteroids will, of course, crush the prices of these metals as traded on Earth (another ATOM effect we have seen elsewhere in other commodities), and may reduce gold to an industrial metal that is used in much the way copper is.  This, of course, may enable new applications that are not cost-effective at the current prices of gold, platinum, palladium, etc.  But that is a topic for another time.  

 

Related :

ATOM AotM, November 2017

SETI and the Singularity

Telescope Power - Yet Another Accelerating Technology

 

November 01, 2019 in Accelerating Change, ATOM AotM, Space Exploration, The ATOM | Permalink | Comments (53)


Timing the Singularity, v2.0

Exactly 10 years ago, I wrote an article presenting my own proprietary method for estimating the timeframe of the Technological Singularity. Since that time, the article has been cited widely as one of the important contributions to the field, and a primary source of rebuttal to those who think the event will be far sooner.  What was, and still is, a challenge is that the mainstream continues to scoff at the very concept, whereas the most famous proponent of this concept persists with a prediction that will prove to be too soon, which will inevitably court blowback when his prediction does not come to pass.  Now, the elapsed 10-year period represents 18-20% of the timeline since the publication of the original article, albeit only ~3% of the total technological progress expected within the period, on account of the accelerating rate of change.  Now that we are considerably nearer to the predicted date, perhaps we can narrow the range of estimation somewhat, and provide other attributes of precision.  

In order to see if I have to update my prediction, let us go through updates on each of the four methodologies one by one, of which mine is the final entry of the four.  

1) Ray Kurzweil, the most famous evangelist for this concept, has estimated the Technological Singularity for 2045, and, as far as I know, is sticking with this date.  Refer to the original article for reasons why this appeared incorrect in 2009, and what his biases leading to a selection of this date may be.  As of 2019, it is increasingly obvious that 2045 is far too early a date for a Technological Singularity (which is distinct from the 'pre-singularity' period I will define later).  In reality, by 2045, while many aspects of technology and society will be vastly more advanced than today, there will still be several aspects that remain relatively unchanged and underwhelming to technology enthusiasts.  Mr. Kurzweil is currently writing a new book, so we shall see if he changes the date or introduces other details around his prediction.  

2) John Smart's prediction of 2060 ± 20 years from 2003 is consistent with mine.  John is a brilliant, conscientious person and is less prone to let biases creep into his predictions than almost any other futurist.  Hence, his 2003 assessment appears to be standing the test of time.  See his 2003 publication here for details.  

3) The 2063 date in the 1996 film Star Trek : First Contact portrays a form of technological singularity triggered from the effect that first contact with a benign, more advanced extraterrestrial civilization had on changing the direction of human society within the canon of the Star Trek franchise.  For some reason, they chose 2063 rather than a date earlier or later, answering what was the biggest open question in the Star Trek timeline up to that point.  This franchise, incidentally, does have a good track record of predictions for events 20-60 years after a particular Star Trek film or television episode is released.  Interestingly, there has been exactly zero evidence of extraterrestrial intelligence in the last 10 years despite an 11x increase in the number of confirmed exoplanets.  This happens to be consistent with my separate prediction on that topic and its relation to the Technological Singularity.  

4) My own methodology, which also gave rise to the entire 'ATOM' set of ideas, is due for an evaluation and update.  Refer back to the concept of the 'prediction wall', and how in the 1860s the horizon limit of visible trends was a century away, whereas in 2009 it was in perhaps 2040, or 31 years away.  This 'wall' is the strongest evidence of accelerating change, and in 2019, it appears that the prediction wall has not moved 10 years further out in the elapsed interval.  It is still no further than 2045, or just 26 years away.  So in the last 10 years, the prediction wall has shrunk from 31 years to 26 years, or approximately 16%.  As we get to 2045 itself, the prediction wall at that time might be just 10 years, and by 2050, perhaps just 5 years.  As the definition of a Technological Singularity is when the prediction wall is almost zero, this provides another metric through which to arrive at a range of dates.  These are estimations, but the prediction wall's distance has only ever shrunk; it has never risen or stayed the same.  The period during which the prediction wall is under 10 years, particularly when Artificial Intelligence has an increasing role in prediction, might be termed the 'pre-Singularity', which many people will mistake for the actual Technological Singularity.  

Through my old article, The Impact of Computing, which was the precursor of the entire ATOM set of ideas, we can estimate the progress made since original publication.  In 2009, I estimated that exponentially advancing (and deflation-causing) technologies were about 1.5% of World GDP, allowing for a range between 1% and 2%.  10 years later, I estimate that number to be somewhere between 2% and 3.5%.  If we allow a newly updated range of 2.0-3.5% in the same table, and estimate the net growth of this diffusion relative to the growth of the entire economy (Nominal GDP) as the same range between 6% and 8% (the revenue growth of the technology sector above NGDP), we get an updated table of when 50% of the World economy comprises technologies advancing at Moore's Law-type rates.  

We once again see these parameters deliver a series of years, with the median values arriving at around the same dates as aforementioned estimates.  Taking all of these points in combination, we can predict the timing of the Singularity.  I hereby predict that the Technological Singularity will occur in :

 

2062 ± 8 years

 

This is a much tighter range than we had estimated in the original article 10 years ago, even as the median value is almost exactly the same.  We have effectively narrowed the previous 25-year window to just 16 years.  It is also apparent that by Mr. Kurzweil's 2045 date, only 14-17% of World GDP will be infused with exponential technologies, which is nothing close to a true Technological Singularity.     
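The table's logic can be reproduced in a few lines.  A sketch under the stated assumptions (2.0-3.5% of World GDP in 2019, diffusion growing 6-8%/yr faster than NGDP, Singularity defined as the point where the share reaches 50%):

```python
import math

def year_of_singularity(share_2019, excess_growth, start_year=2019):
    """Year when exponential technologies reach 50% of World GDP,
    given their 2019 GDP share and annual growth in excess of NGDP."""
    years = math.log(0.5 / share_2019) / math.log(1 + excess_growth)
    return start_year + years

# Sweep the parameter ranges from the article.
for share in (0.020, 0.0275, 0.035):
    for g in (0.06, 0.07, 0.08):
        print(f"share={share:.3f}, growth=+{g:.0%}: {year_of_singularity(share, g):.0f}")
```

The central parameter combinations land on roughly 2062, consistent with the prediction above; only the extreme corners of the grid (lowest share with slowest growth, highest share with fastest growth) fall outside the ± 8-year window.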

So now we know the 'when' of the Singularity.  We just don't know what happens immediately after it, nor can anyone say with any certainty. 

 

Related :

Timing the Singularity, v1.0

The Impact of Computing

Are You Acceleration Aware?

Pre-Singularity Abundance Milestones

SETI and the Singularity

 

Related ATOM Chapters :

2 : The Exponential Trendline of Economic Growth

3 : Technological Disruption is Pervasive and Deepening

4 : The Overlooked Economics of Technology

 

 

August 20, 2019 in Accelerating Change, Artificial Intelligence, Computing, Core Articles, Economics, Technology, The ATOM, The Singularity | Permalink | Comments (66)


The Federal Reserve Continues to Get it Wrong

The most recent employment report revealed 279,000 new jobs (including revisions to prior months), and an unemployment rate of just 3.6%, which is a 50-year low.  Lest anyone think that this month was an anomaly, the last 12 months have registered about 2.6M new jobs (click to enlarge).  
 
Over the last two years, the Federal Reserve, still using economic paradigms from decades ago, assumed that when unemployment goes below 5.0%, inflation would emerge.  With this expectation, they proceeded on two economy-damaging measures : raising the FF rate and Quantitative Tightening (i.e. reversal of Quantitative Easing, to the tune of $50B/month). 
 
Even as the Fed raised the Fed Funds rate all the way up from the appropriate 0% to the far-too-high 2.5%, the yield on the 10-year note is still 2.1%, resulting in an inverted yield curve.  Similarly, inflation continues to remain muted, even after $23 Trillion and counting of worldwide QE, as I have often pointed out.
 
Yet, the Federal Reserve STILL wanted to raise interest rates, in direct violation of their own supposed principles regarding both the yield curve and existing inflation.  They were exposed as looking at only one indicator : the unemployment rate.  Their actions reveal that they think that a low unemployment rate presages inflation, and no other indicator matters.  
 
Now, for the big question : Why do they think any UE rate under 5.0% leads to inflation, and why are they getting it so wrong now? 
 
The answer is that back in the 1950-80 period, too many people having jobs led to excess demand for materially heavy items (cars, houses, etc.).  In those days, there was far too little deflationary technology to affect traditional statistics.  
 
Today, people still buy these things, but a certain portion of their consumption (say, 2%) consists of software.  Software consumes vastly less physical matter to deploy and operate, and never 'runs out of supply', particularly now in the download/streaming era.  If Netflix had 10 million new people sign up tomorrow, the cost of servicing them would be minimal, and the time spent signing up all of the new customers would also be negligible.  This is not hard to understand at all, except for those who 'know so much that isn't so'.  The Federal Reserve has over 600 PhDs, but if they all just cling to the same outdated models and look at just ONE indicator, having 600 PhDs is no better than having one PhD (and, in this case, worse than having zero PhDs).  
 
But alas, the Federal Reserve (and by extension, most PhD macroeconomists) just cannot adjust to this 21st-century economic reality, even though they cannot explain the lack of inflation, and remain incurious about why this is.  They are afflicted with a level of 'egghead' groupthink exceeding that of any other major field today.  When this happens, we are often on the brink of a major historical turning point.  Analogous situations in the past were when the majority of mechanical engineers in the 1880s insisted that heavier-than-air flying machines large enough to carry even a single human were not possible, and when pre-Copernican astronomers believed the Sun revolved around the Earth.  
 
The percentage of the total economy that is converging into high-tech (and hence high-deflation) technologies is rising, and is now up to 2.5-3.0% of total world GDP.  This disconnect can only widen.
 
President Trump, seeing what is obvious here, has not just pressured the Federal Reserve to stop raising rates (they were about to raise again in late 2018, which would have created the inverted yield curve that they supposedly consider to be troubling), but has recently said that the Fed should lower the Fed Funds rate by 1%, effectively saying that their last four rate hikes were ill-considered.  He rightfully flipped the script on them.  
 
Now, normally I would be the first to say that a head of state should not pressure a central bank in any way, but in this particular case, the President is correct, and the ivory tower is wrong.  The correct outcome through the wrong channel is not ideal, but the alternative is a needless recession that damages the financial well-being of hundreds of millions of people and destroys millions of jobs.  He is right to push back on this, and anyone who cares about jobs must hope he can halt and reverse the Fed's damage-causing trajectory.  
 
In this vein, I urge everyone who is on board with the ATOM concepts, and who wishes to avoid an entirely needless recession, to send polite emails to the Federal Reserve Board of Governors, with a request that they look at the ATOM publication and correct their outdated grasp of monetary effects from liquidity programs, and the necessity of modernizing the field of macroeconomics for the technological age.  The website via which to contact them is here :
 
https://www.federalreserve.gov/aboutthefed/contact-us-topics.htm
 
We are at a crucial juncture in the history of macroeconomics, the economics of technology, and the entire concept of jobs and employment.  It is a matter of time before a Presidential candidate stands before a cheering audience and points out how trillions of QE were done, but none of the people in the audience got a single dime.  Imagine such a candidate simply firing up the audience with queries of "Did you get a QE check?  Did you get a QE check?  ¿Recibió usted un cheque de QE?"  That could be a political meme that gains very rapid momentum.  
 
This is how a version of UBI will eventually happen.  We, of course, call it something better : DUES (Direct Universal Exponential Stipend).  
 
The question is not if, but when : probably when least expected, such a leader will emerge (likely not in the US) to transition us to this era of new economic realities.  It will certainly be someone from the tech industry (the greatest concentration of people who 'get it' regarding what I have just elaborated above).  Who will be that leader?  A major juncture of history is on the horizon.  All roads lead to the ATOM.  
 
Related ATOM Chapters :
 
4. The Overlooked Economics of Technology
 
6. Current Government Policy Will Soon be Ineffective
 
10. Implementation of the ATOM Age for Nations
 
 
 

May 18, 2019 in Accelerating Change, Economics, Technology, The ATOM | Permalink | Comments (28)


ATOM Award of the Month, February 2019

For this month's ATOM AotM, we examine something that even the rest of the technology industry is virtually unaware of, and the US public is entirely oblivious to, even though we have a President from the construction industry.  

The US construction industry has had no net productivity gain in the last 70 years.  Even worse, it declined by 50% over the last 50 years.  Construction should be seen as a type of manufacturing, as most construction is not devoted to anything highly customized or unusually complex.  Yet, manufacturing itself has risen in productivity by 800% over the same period that construction has not risen at all.   A combination of organized crime, government graft, and an anti-productivity ethos have contributed to this epic failure.  

Given that construction is about 7% of the US economy, this is troubling.  Imagine if that 7% was 16x more productive (i.e. merely keeping up with manufacturing).  Americans, particularly urban Americans, don't realize that they could have thrice the square footage for the same price if this sector merely kept up with manufacturing.  There would also be several hundred thousand more jobs in construction, and much broader home ownership.    

Meanwhile, outside of the many biases of the Western media, there is an amazing example of supreme construction efficiency.  China's construction productivity has grown at 7% a year over the last 20 years, even as US construction productivity has declined at 1% a year.  This productivity has greatly enlarged the size of China's construction sector, to the extent that it is 20% of China's economy vs. just 7% in the US.  While the two countries are at different stages of growth and China is still at a much lower absolute level, the differential is still immense.  
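Those two annual rates compound into an enormous absolute gap.  A back-of-the-envelope sketch (the 7% and -1% figures are the ones above; everything else is illustrative):

```python
# Compounding the two productivity growth rates cited in the text
# over the 20-year period they cover.  Illustrative arithmetic only.
years = 20
china = 1.07 ** years   # +7%/year compounded
us    = 0.99 ** years   # -1%/year compounded

print(f"China productivity multiple over {years} years: {china:.2f}x")
print(f"US productivity multiple over {years} years:    {us:.2f}x")
print(f"Relative differential opened up: {china / us:.1f}x")
```

Twenty years of quiet compounding is how a nearly 5x relative gap opens up without most observers ever noticing a single dramatic year.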

Normally, in any industry, such an immense productivity differential leads to the productive country exporting products to the less productive country, swiftly driving local unproductive businesses to their deserved demise.  Construction, however, produces a product that is not transportable, so a productivity normalization has not happened.  At least not yet.  But a differential this large eventually finds a way to engineer a normalization.  Modular construction is one method, where parts are manufactured offsite and then assembled on site.  China could start exporting this to the rest of the world.  

Here is a Spire Research report on the advances in China's construction technology.

The Western media, in its hubris, is quite willing to criticize China for building entire cities 10 years before they are needed.  How often have we seen stories about empty cities in China that take a few years to fill up?  By contrast, the United States (and California in particular) does something much worse, which is to build structures 20 years after they are needed.  Given the choice between these two schedule misalignments, China's approach is vastly preferable.  

Beyond this, the costs of US ineptitude are about to become more problematic.  The eCommerce revolution is exposing the massive misallocation of land toward retail space, which is a uniquely American distortion.  Part of this is due to a peculiar depreciation schedule in the tax code originating in 1954.  The abundant land in the US interior led to the same lopsided usage of land in California, leading to the grotesque situation we have today, where ultra-expensive housing resides next to vast, empty parking lots.  High California housing prices are the product of extreme artificial supply restriction, aided by low construction productivity that ensures an apartment complex takes three years to complete, where the same in China takes under one year.    

Dramatic photos of dead malls can easily evoke emotions in the average American, who has been trained to think this sort of retail experience is normal.  But charts revealing the unique extent of US profligacy with regard to retail land suggest a much more logical sequence of impending events.  As eCommerce continues to shutter brick-and-mortar retail, there will be a rising groundswell of pressure to repurpose this land for a more contemporary use.  Unfortunately, the inadequate level of US construction productivity threatens to greatly delay this conversion, severely damaging our national competitiveness relative to China.  

On the subject of where the US may see China catch up, most of the focus is on Artificial Intelligence, Quantum Computing, and other high-concept technologies.  Yet the construction productivity differential alone represents the single biggest sectoral deficit from the point of view of the US and many other countries.  China is well-positioned to dominate the entire construction industry worldwide once it can more easily win international contracts and transplant its productivity practices abroad.  If the US blocks Chinese construction imports, other countries across the world will happily partake in these high-quality end products.  This should be welcomed by anyone with a free-market bent. 

For this reason, China's construction sector, in breaking the low-productivity pattern seen in almost the entire rest of the world, is the recipient of the February 2019 ATOM AotM.  

 

 

February 01, 2019 in Accelerating Change, ATOM AotM, China, Economics, The ATOM | Permalink | Comments (68)


Economic Trendline Reversion Does Not Happen Evenly

If we could point to one aspect that makes the modern era different from centuries past, the premier candidate for that distinction is how the centuries-established exponential, accelerating trend of technological progress manifests in economics, and the fact that the trendline is now in a steep upward trajectory.  These are all worldwide metrics, and have to be.  But if one examines the components, the variance contained therein is immense.

One table that I use relatively often is the one depicting relative GDP gain by country; I have in the past used it to describe how the 2008-09 crisis led to the rebound happening elsewhere.  Google has just updated its economic data engine for 2017, enabling a full decade to be included from the start of the prior crisis.  This enables us to see what happens when the global economy experiences a major dislocation.  The Great Depression (1929-39) was one such dislocation, and while the trendline is too steep today for a downturn of similar duration to manifest in the global economy, the more recent dislocation was almost as dramatic in terms of how it reoriented the tectonic plates of the global economy.  

From the table, we see that the World Economy grew by 40% in Nominal GDP.  We do not adjust for inflation in these metrics for reasons detailed in the ATOM publication, and we take the US$ metric as universal.  

The US, remarkably, did not grow at a much slower rate than the world average, and hence has not yet experienced a substantial proportional shrinkage.  By contrast, the rest of the advanced world has scarcely grown at all, while European economies have outright shrunk.  An advanced country, of course, does not have the same set of factors to contend with as an emerging economy that is at a stage where high growth is easier, hence this is really two tables in one.  India's underperformance relative to China is just as substandard as the UK's underperformance relative to the US.  

China has effectively dominated the entire world's growth.  China has grown at an astounding 245%, partly due to a structural strengthening of its currency, which itself is partly due to their more advanced understanding of technological deflation and the monetization of such through their central bank (as per the ATOM concepts).  India has not experienced any such strengthening of its currency (quite the opposite, in fact), which is why India's economy has grown at a far slower rate despite starting from a very low base.  
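For readers who prefer annualized figures, the decade totals above convert to compound annual growth rates directly.  A small sketch (the +40% and +245% inputs are the figures cited above; the function is mine):

```python
# Converting the decade-long nominal GDP growth figures in the text
# into compound annual growth rates (CAGR).  Illustrative arithmetic.
def cagr(total_growth_pct, years=10):
    """Annualized growth rate (in %) implied by a total % gain over `years`."""
    return ((1 + total_growth_pct / 100) ** (1 / years) - 1) * 100

world_cagr = cagr(40)    # world nominal GDP: +40% over the decade
china_cagr = cagr(245)   # China: +245% over the same decade

print(f"World: {world_cagr:.1f}%/year")
print(f"China: {china_cagr:.1f}%/year")
```

Roughly 3.4%/year for the world against roughly 13%/year for China makes the scale of the outperformance easier to internalize than the raw decade totals.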

Consider this other chart, of GDP distribution by country (as per the current borders) from the year 1 until 2017.  The growth of China (and to a lesser extent, India) appears to be a reversion to a status quo that existed from the dawn of civilization all the way until the early 19th century.  If this factor is combined with the exponential trend of world growth, then China's current outperformance seems less like an aberration.  

This raises the question of what the next decade will look like.  There is almost no chance that China can outperform the RoW by the same magnitude from this point onwards, simply because the RoW is no longer large enough to absorb the same intake of Chinese exports relative to China's size as before.  But will the convergence take the form of China slowing down, or of the RoW speeding up?  Will India experience the same convergence to its pre-19th-century proportional size, or is India a lost cause?

Under the ATOM program, it could certainly be the latter, since the advanced economies already have enough technological deflation that they can monetize it through central bank monetary creation.  China, by contrast, will not be technologically dense enough for it until 2024 or so.  The US could rise to 5-6%/year Real GDP growth by 2025.  

The current mindset in the Economics profession is vastly outdated, and there is little to no curiosity about accelerating economic growth rates, or about the relationship between technological deflation and central bank monetary action.  If China can no longer be an outlet to accommodate the entirety of the trendline reversion force that is seeking to work around these obstructions, then explosive growth combined with chaotic disruption will happen somewhere else.  

 

Related ATOM Chapters :

2.  The Exponential Trendline of Economic Growth

 

 

July 10, 2018 in Accelerating Change, China, Economics, India, The ATOM | Permalink | Comments (106)


ATOM Award of the Month, May 2018

For the May 2018 ATOM AotM, we will visit a technology that is not a distinct product or company, but rather is a feature of consumer commerce that we would now find impossible to live without.  This humble yet indispensable characteristic of multiple websites has saved an incalculable amount of frustration and productivity loss.  I am, of course, referring to web-based reviews.  

Lest you think this is a relatively minor technology to award an ATOM AotM to, think again, for a core principle of technological progress is that a technology is most successful when it is barely even noticed despite a ubiquitous presence.  

Part of what has enabled eCommerce to siphon away an ever-rising portion of brick and mortar retail's revenue is the presence of reviews on sites like Amazon.  Beyond eCommerce, sites like Yelp have greatly increased the information access of consumers seeking to patronize a low-tech business, while media sites permit a consumer to quickly decide which films and video games are worthwhile without risking a blind purchase.  While false reviews were a feature of the early Internet for over a decade, now there is considerable ability to filter those out.  
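As an aside on how that filtering can work : one widely used technique (IMDB has published a formula of this general shape for its charts) is a Bayesian weighted average that pulls items with few votes toward the site-wide mean, blunting the effect of a handful of false five-star reviews.  The parameter values below are hypothetical, not any particular site's:

```python
# A minimal sketch of one common defense against sparse or gamed ratings:
# a Bayesian weighted average that pulls low-vote items toward the
# site-wide mean.  The prior weight `m` and `site_mean` are hypothetical.
def weighted_rating(avg_rating, num_votes, m=50, site_mean=3.5):
    """Blend an item's average rating with the site mean, by vote count."""
    v = num_votes
    return (v / (v + m)) * avg_rating + (m / (v + m)) * site_mean

# A 5-star item with only 3 votes ranks below a 4.5-star item with 500 votes.
few_votes  = weighted_rating(5.0, 3)
many_votes = weighted_rating(4.5, 500)

print(f"3 votes at 5.0 -> {few_votes:.2f};  500 votes at 4.5 -> {many_votes:.2f}")
```

The design choice is simple : the fewer the votes, the more the score is anchored to the prior, so a burst of fake reviews on a new listing moves the displayed rating very little.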

I recall a frustrating episode that a friend and I experienced in 1999.  We wanted to rent a film from Blockbuster Video, but did not know which one.  We found one that had familiar actors, but the movie was extremely subpar, resulting in a sunk cost of the rental fee, transportation costs, and the time spent on the film and two-way transit.  When returning to Blockbuster to drop off the VHS cassette, we selected another film based on the same criteria.  It was even worse.  We had rented two separate films over two separate round trips to Blockbuster, only to be extremely unsatisfied.  Movie review sites like IMDB did exist at the time, but my friend did not have home Internet access (his Internet activities were restricted to his workplace, as was common at the time).  

Now, in this anecdote, just list the number of ATOM disruptions that have transpired since :

  1. There is no longer a 'Blockbuster Video' that rents VHS cassettes, as films are rented online or available through a Netflix subscription.
  2. Everyone has home Internet access, and can see a film's reviews before ever leaving home.

Hence, it is no longer possible to waste hours of time and several dollars on a bad film.  The same goes for restaurants, and in this case, both the consumer and the business are shielded from an information mismatch on the part of the consumer.  I have always felt that it was unfair for a patron to judge a restaurant negatively if they themselves did not order what they might have liked.  Now, with Yelp, in addition to reviews, there are pictures, enabling a vastly more informed decision.  

Even for higher-stakes decisions, such as the selection of a dentist or auto mechanic, reviews have slashed the uncertainty that people lived under just 12 years ago.  The better vendors attract more business, while substandard (or worse, unethical) vendors have been exposed to the light of day.  This is a more powerful form of quality control than has ever existed before.

Now, to see where the real ATOM effects are found, consider the value of the data being aggregated.  This drives better product design and better marketing.  This also expands the roadmaps of accessory products or complementary products.  The data itself begins to fuel artificial intelligence, for remember that any pile of data of sufficient size tends to attract artificial intelligence to it.  This leads to a lot of valuable analytics and automation.  

If one were to rank the primary successful Internet use cases to date, the ability to see reviews of products and services would rank very high on the list.  For this reason, this receives the May 2018 ATOM AotM.  

 

 

 

May 29, 2018 in Accelerating Change, ATOM AotM, The ATOM | Permalink | Comments (24)


ATOM Award of the Month, January 2018

With the new year, we have a new ATOM AotM.  This is an award for a trend that ought to be easy to recognize for anyone at all familiar with Moore's Law-type concepts, yet it is greatly overlooked despite quite literally being in front of people's faces for hours a day.  

The most crude and uninformed arguments against accelerating technological progress are either of a "Word processing is no better than in 1993, so Moore's Law no longer matters" or "People can't eat computers, so the progress in their efficiency is useless" nature.  However, improvements in semiconductor and similar technologies endlessly find their way into previously low-tech products, which is the most inherent ATOM principle.  

The concept of television has altered cultures across the world more than almost any other technology.  The range of secondary and tertiary economies created around it is vast.  The 1960 set pictured here, at $795, cost 26% of US annual per capita GDP at the time.  The equivalent price today would be $15,000.  Content was received over the air, and was often subject to poor reception.  The weight and volume of the device relative to the area of the screen were high, and the floorspace consumed was substantial.  There were three network channels in the US (while most other countries had no broadcasts at all).  There was no remote control.  
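The $15,000 figure can be reproduced in two lines.  Note that the 2017 US per-capita GDP value used here (~$58,000) is my own assumption for the sketch, not a figure from the text:

```python
# Reproducing the "equivalent price today" arithmetic from the text.
# The 2017 per-capita GDP figure (~$58,000) is an outside assumption.
price_1960 = 795
share_of_income = 0.26                  # 26% of 1960 per-capita GDP, per the text

implied_1960_gdp_pc = price_1960 / share_of_income   # backs out ~$3,000
gdp_pc_2017 = 58_000                                 # assumed
equivalent_today = share_of_income * gdp_pc_2017     # same share of income now

print(f"Implied 1960 per-capita GDP: ${implied_1960_gdp_pc:,.0f}")
print(f"Same income share today:     ${equivalent_today:,.0f}")
```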

There were slow, incremental improvements in resolution and screen-size-to-unit-weight ratios from the 1960s until around 2003, when one of the first thin television sets was available at the retail level.  It featured a 42" screen, was only 4 inches thick, and cost $8000.  Such a wall-mountable display, despite the high price, was a substantial improvement above the cathode ray tube sets of the time, most of which were too large and heavy to be moved by one person, and consumed a substantial amount of floor space.

But in true ATOM exemplification, this minimally-improving technology suddenly got pulled into rapid, exponential improvement (part of how deflationary technology increased from 0.5% of World GDP in 1999 to 1% in 2008 to 2% in 2017).  Once the flat screen TV was on the market, plasma and LCD displays eventually gave way to LED displays, which are a form of semiconductor and improve at Moore's Law rates. 
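Those three data points imply the deflationary-tech share of World GDP has been doubling roughly every nine years.  A small sketch of that trendline (the extrapolated years are my illustration, not a claim from the text):

```python
# The deflationary-tech share of World GDP cited in the text: 0.5% (1999),
# 1% (2008), 2% (2017) — i.e. one doubling roughly every 9 years.
doubling_years = 9
annual_growth = 2 ** (1 / doubling_years) - 1   # implied growth of the share

share = 2.0   # % of World GDP in 2017
for year in range(2017, 2036, 9):
    print(f"{year}: {share:.1f}% of World GDP")
    share *= 2   # one doubling per 9-year step (illustrative extrapolation)

print(f"Implied growth of the share: {annual_growth * 100:.1f}%/year")
```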

Today, even 60-inch sets, a size considered extravagant in 2005, are very inexpensive.  Like any other old electronic device, slightly out-of-date sets are available on Craigslist in abundance (contributing to the Upgrade Paradox).  A functional used set that cost $8000 in 2003 can hardly be sold at all in 2018; the owner is lucky if someone is willing to come and take it for free.    

Once ATOM-speed improvements assimilate a technology, the improvements never stop; sets of the near future may be thin enough to be flexible, with resolutions of 4K, 8K, and beyond.  Sets larger than 240" (20 feet) are similarly declining in price, and are visible in increasing numbers in commercial use (think Times Square, everywhere).  This is hence one of the most visible examples of ATOM disruption, and of how cities of today have altered their appearance relative to the recent past.  

This is a large ATOM disruption, as there are still 225 Million new sets sold each year, amounting to $105 Billion/year in sales.  

 

Related :

The Impact of Computing

 

Related ATOM Chapters :

3. Technological Disruption is Pervasive and Deepening.

 

January 21, 2018 in Accelerating Change, ATOM AotM, Computing, Technology, The ATOM | Permalink | Comments (53)


ATOM-Oriented Class at Stanford

I have been selected to teach a class at Stanford Continuing Studies, titled 'The New Economics of Technological Disruption'.  For Bay Area residents, it would be great to see you there.  There are no assignments or exams for those who are not seeking a letter grade, and by Stanford standards, the price ($525 for an 8-week class) is quite a bargain.  

44 students have already signed up.  See the course description, dates, and more.  

 

 

January 07, 2018 in Accelerating Change, Economics, Technology, The ATOM | Permalink | Comments (0)


ATOM Award of the Month, November 2017

For this month, the ATOM AotM goes outward.  Much like the September ATOM AotM, this is another dimension of imaging.  But this time, we focus on the final frontier.  Few have noticed that the rate of improvement of astronomical discovery is now on an ATOM-worthy trajectory, such that this merited an entire chapter in the ATOM publication.  

Here at The Futurist, we have been examining telescopic progress for over a decade.  In September of 2006, I estimated that telescope power was rising at a compounding rate of 26%/year, and that this trend had been ongoing for decades.  26%/year happens to be the square root of Moore's Law, which is precisely what is to be expected, since to double resolution by halving the size of a pixel, one pixel has to be divided into four.  This is also why video game and CGI resolution rises at 26%/year.  
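The "square root of Moore's Law" relationship can be checked in a few lines : a 26%/year gain in linear resolution compounds to roughly a doubling of pixel count every 18 months, matching the classic Moore's Law cadence:

```python
import math

# Checking the "square root of Moore's Law" claim: if linear resolution
# rises 26%/year, pixel count rises 1.26^2 ≈ 1.59x per year, which
# corresponds to a doubling roughly every 18 months.
linear_rate = 1.26                    # 26%/year in each dimension
pixel_rate = linear_rate ** 2         # growth in total pixel count per year

doubling_time = math.log(2) / math.log(pixel_rate)   # in years

print(f"Pixel-count growth: {pixel_rate:.2f}x/year")
print(f"Implied doubling time: {doubling_time * 12:.0f} months")
```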

Rising telescope resolution enabled the first exoplanet to be discovered in 1995, and then a steady stream after 2005.  This estimated rate led me to correctly predict that the first Earth-like planets would be discovered in 2010-11, and that happened right on schedule.  But as with many such thresholds, after the initial fanfare, the new status quo manifests and people forget what life was like before.  This leads to a continuous underestimation of the rate of change by the average person.

Then, in May 2009, I published one of the most important articles ever written on The Futurist : SETI and the Singularity.  At that time, only 347 exoplanets were known, almost all of which were gas giants much larger than the Earth.  That number has grown to 3693 today, over ten times as many.  Note how we see the familiar exponential curve inherent to every aspect of the ATOM.  Now, even finding Earth-like planets in the 'life zone' is no longer remarkable, which is another aspect of human psychology towards the ATOM : a highly anticipated and wondrous advance quickly becomes a normalized status quo, and most people forget all the previous excitement.   
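The two counts above pin down the implied compound growth rate.  A sketch, taking roughly 8.5 years between the May 2009 and late-2017 figures (the interval length is my approximation):

```python
# Implied compound growth in the known-exoplanet count, from the two
# data points given in the text (347 in May 2009, 3693 in late 2017).
start, end = 347, 3693
years = 8.5   # mid-2009 to late 2017, approximately

growth = (end / start) ** (1 / years) - 1
print(f"Implied growth in the exoplanet count: ≈{growth * 100:.0f}% per year")
```

A count compounding at over 30%/year is exactly the sort of curve the ATOM framework predicts for an instrument-driven field.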

The rate of discovery may soon accelerate further as key process components collapse in cost.  Recent computer vision algorithms have proven themselves to be millions of times faster than human examiners.  A large part of the cost of exoplanet discovery instruments like the Kepler Space Observatory is the 12-18 month manual analysis period.  If computer vision can perform this task in seconds, the cost of comparable future projects plummets, and new exoplanets are confirmed almost immediately rather than every other year.  This is another massive ATOM productivity jump that removes a major bottleneck in an existing process structure.  A new mission like Kepler would cost dramatically less than the previous one, and would be able to publish results far more rapidly.  
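To make the automation point concrete, here is a deliberately toy sketch of the core task : scanning a light curve for periodic dips in brightness, the signature of a transiting planet.  The synthetic data, threshold, and function are entirely mine; real Kepler pipelines (and the machine-learning classifiers alluded to above) are vastly more sophisticated:

```python
# Toy illustration of automated transit detection: flag dips in a
# synthetic light curve.  A sketch only — not the Kepler pipeline.
def find_dips(flux, threshold=0.995):
    """Return indices where normalized flux drops below the threshold."""
    return [i for i, f in enumerate(flux) if f < threshold]

# Synthetic light curve: flat at 1.0, with a ~1% dip every 50 samples
# (a stand-in for a planet periodically blocking part of its star's light).
flux = [1.0] * 200
for start in range(25, 200, 50):
    for i in range(start, start + 3):
        flux[i] = 0.99

dips = find_dips(flux)
print(f"{len(dips)} in-transit samples, starting at index {dips[0]}")
```

The point of the sketch is the speed : a loop like this processes an entire light curve in microseconds, where a human examiner needs minutes per candidate.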

Given the 26%/year trendline, the future of telescopic discovery becomes easier to predict.  In the same article, I made a dramatic prediction about SETI and the prospects of finding extraterrestrial intelligence.  Many 'enlightened' people are certain that there are numerous extraterrestrial civilizations.  While I too believed this for years (from age 6 to about 35), as I studied the accelerating rate of change, I began to notice that within the context of the Drake equation, any civilization even slightly more advanced than us would be dramatically more advanced.  For such a civilization, while its current activities might very well be indistinguishable from nature to us, its past activities might still be visible as evidence of its existence at that time.  This led me to realize that while there could very well be thousands of planets in our own galaxy that are slightly less advanced than us, it becomes increasingly difficult for there to be one more advanced than us that still manages to avoid detection.  Other galaxies are a different story, simply because the distance between galaxies is itself 10-20 times more than the diameter of the typical galaxy.  Our telescopic capacity is rising 26%/year, after all, and the final variable of the Drake equation, fL, has risen from just 42 years at the time of Carl Sagan's famous clip in 1980 to 79 years now, or almost twice as long.  
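For reference, the Drake equation is a simple product of factors, and the estimate it produces scales linearly with its final term (written fL above; conventionally written L, the detectable lifetime of a civilization).  A minimal sketch, where every parameter value except the 42-year and 79-year figures from the text is an assumption chosen purely for illustration:

```python
# A minimal Drake-equation sketch.  All parameter values below except
# L (42 years in 1980, 79 now, per the text) are illustrative assumptions.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = expected number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Holding the (assumed) astrophysical terms fixed, N scales linearly with L.
n_1980 = drake(R_star=1.0, f_p=0.5, n_e=1.0, f_l=0.1, f_i=0.1, f_c=0.1, L=42)
n_now  = drake(R_star=1.0, f_p=0.5, n_e=1.0, f_l=0.1, f_i=0.1, f_c=0.1, L=79)

print(f"N(1980)={n_1980:.3f}  N(now)={n_now:.3f}  ratio={n_now / n_1980:.2f}")
```

Whatever values one assigns to the other terms, the growth of the final term alone has nearly doubled the equation's output since 1980, which is the narrow point the paragraph above makes.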

Hence, the prediction I made in 2009 about the 2030 deadline (21 years away at the time) can be re-affirmed, as the 2030 deadline is now only 13 years away.  

2030

Despite the enormity of our galaxy and the wide range of signals that may exist, even this is eventually superseded by exponential detection capabilities.  At least our half of the galaxy will have received a substantial examination of signal traces by 2030.  While a deadline 13 years away seems near, remember that the extent of examination that happens 2017-30 will be more than in all the 400+ years since Galileo, for Moore's Law reasons alone.  The jury is out until then.  

(all images from Wikipedia or Wikimedia).  

 

Related Articles :

New Telescopes to Reveal Untold Wonders

SETI and the Singularity

Telescope Power - Yet Another Accelerating Technology

 

Related ATOM Chapters :

12. The ATOM's Effect on the Final Frontier

  

 

November 20, 2017 in Accelerating Change, ATOM AotM, Space Exploration, The Singularity | Permalink | Comments (122)


ATOM Award of the Month, September 2017

For September 2017, the ATOM AotM takes a very visual turn.  With some aspects of the ATOM, seeing is believing.    

Before photography, the only image capture was through sketches and paintings.  This was time-consuming, and well under 1% were prosperous enough to have even a single hand-painted portrait of themselves.  For most people, after they died, their families had only memories via which to imagine faces.  If portraits were this scarce, other images were even scarcer.  When image capture was this scarce, people certainly had no chance of seeing places, things, or creatures from far away.  It was impossible to know much about the broader world.    

The very first photograph was taken as far back as 1826, and black & white was the dominant form of the medium for over 135 years.  That it took so long for b&w to transition to color may seem quite surprising, but the virtually non-existent ATOM during this period is consistent with such a glacial rate of progress.  The high cost of cameras meant that the number of photographs taken in the first 100 years of photography (1826-1926) was still extremely small.  Eventually, the progression to color film seemed to be a 'completion' of the technological progression in the minds of most people.  What more could happen after that?  

But the ATOM was just getting started, and it caught up with photography around the turn of the century with relatively little fanfare, even though it was notable that film-based photography and the hassles associated with it were removed from the consumer experience.  The cost of film was suddenly zero, as were the transit time and cost of the development center.  Now, everyone could have thousands of photos, and send them over email endlessly.  Yet, standalone cameras still cost $200 as of 2003, and were too large to be carried around everywhere at all times.  

As the ATOM progressed, digital cameras got smaller and cheaper, even as resolution continued to rise.  It was discovered that the human eye does in fact adapt to higher resolution, and finds previously acceptable lower resolution unacceptable after adapting to higher resolution.  Technology hence forces higher visual acuity and the associated growth of the brain's visual cortex.  

With the rise of the cellular phone, the ATOM enabled more and more formerly discrete devices to be assimilated into the phone, and the camera was one of the earliest and most obvious candidates.  The diffusion of this was very rapid, as we can see from the image that contrasts the 2005 vs. 2013 Papal inaugurations in Vatican City.  Before long, the cost of an integrated camera trended towards zero, to the extent that there is no mobile device that does not have one.  As a result, 2 billion people have digital cameras with them at all times, and stand ready to photograph just about anything they think is important.  Suddenly, there are countless cameras at every scene.  

But lest you think the ubiquity of digital cameras is the end of the story, you are making the same mistake as those who thought color photography on film in 1968 was the end of the road.  Remember that the ATOM is never truly done, even after the cost of a technology approaches zero.  Digital imaging itself is just the preview, for now we have it generating an ever-expanding pile of an even more valuable raw material : data.  

Images contain a large volume of data, particularly the data that associates things with each other (the eyes are to be above the nose, for example).  Data is one of the two fuels of Artificial Intelligence (the other being inexpensive parallel processing).  Despite over a decade of digital images being available on the Internet, only now are there enough of them for AI to draw extensive conclusions from them, and for Google's image search to be a major force in the refinement of Google's Search AI.  Most people don't even remember when Google added image search to its capabilities, but now it is hard to imagine life without it.  

Today, we have immediate access to image search that answers questions in the blink of an eye, and fosters even greater curiosity.  In a matter of seconds, you can look up images for mandrill teeth, the rings of Saturn, a transit of Venus over the Sun, the coast of Capri, or the jaws of Carcharocles megalodon.  More searches lead to more precise recommendations, and more images continue to be added.  In the past, the accessibility of this information was so limited that the invaluable tangents of curiosity just never formed.  Hence, the creation of new knowledge speeds up.  The curious can more easily pull ahead of the incurious.  

Digital imaging is one of the primary transformations that built the Internet age, and is a core pillar of the impending ascent of AI.  For this reason, it receives the September 2017 ATOM AotM.    

 

Related ATOM Chapters :

3. Technological Disruption is Pervasive and Deepening

 

September 30, 2017 in Accelerating Change, Artificial Intelligence, ATOM AotM, Technology, The ATOM | Permalink | Comments (80)


Recent TV Appearances for The ATOM

I have recently appeared on a couple of television programs.  The first was Reference Point with Dave Kocharhook, as a two-part Q&A about The ATOM.


The next one was FutureTalk TV with Martin Wasserman, that included a 10-minute Q&A about The ATOM.

Inch-by-inch, we will get there.  The world does not have to settle for our current substandard status quo.

As always, all media coverage is available here.  

 

 

June 05, 2017 in Accelerating Change, Artificial Intelligence, Economics, Technology, The ATOM, The Singularity | Permalink | Comments (24)


The Upgrade Paradox

There is an emerging paradox within the flow of technological diffusion.  The paradox is, ironically, that rapid progress of technology has constrained its own ability to progress further.  

What exactly does this mean?  As we see from Chapter 3 of the ATOM, all technological products currently amount to about 2% of GDP.  The speed of diffusion is ever faster (see chart), and the average household is taking on an ever-widening range of rapidly advancing products and services.    

Recall the section from that chapter about the number of technologically deflating nodes in the average US household by decade (easily verified by watching any TV program from that decade), which included a poll for readers to declare their own count of nodes.  To revisit the same exercise here :

Include : Actively used PCs, LED TVs and monitors, smartphones, tablets, game consoles, VR headsets, digital picture frames, LED light bulbs, home networking devices, laser printers, webcams, DVRs, Kindles, robotic toys, and every external storage device.  Count each car as 1 node, even though modern cars may have $4000 of electronics in them.

Exclude : Old tube TVs, film cameras, individual software programs and video games, films on storage discs, any miscellaneous item valued at less than $5, or your washer/dryer/oven/clock radio just for having a digital display, as the product is not improving dramatically each year.

 
 
An estimate of the results this poll would have yielded by decade, for the US :

1970s and earlier : 0

1980s : 1-2

1990s : 2-4

2000s : 5-10

2010s : 12-30

2020s : 50-100

2030s : Hundreds?
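Taking the midpoint of each decade's range, the progression above implies a fairly steady compound growth rate in household nodes.  A back-of-the-envelope sketch (the midpoints and the fit are my own illustration, not figures from the poll itself) :

```python
# Back-of-the-envelope fit of the node estimates above, using the
# midpoint of each decade's range.
midpoints = {1985: 1.5, 1995: 3, 2005: 7.5, 2015: 21, 2025: 75}

years = sorted(midpoints)
span = years[-1] - years[0]  # 40 years between first and last midpoint
growth = (midpoints[years[-1]] / midpoints[years[0]]) ** (1 / span)
print(f"Implied compound growth in household nodes: {growth - 1:.1%}/year")
# roughly 10%/year, i.e. the node count doubles about every 7 years
```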

Herein lies the problem for the average household.  Upgrading PCs, smartphones, networking equipment, TVs, storage, and in some cases the entire car, has become expensive.  This can often run over $2000/year, and unsurprisingly, upgrades have been slowing.  

The technology industry is hence a victim of its own success.  By releasing products that cause so much deflation, and hence low Nominal GDP growth and sluggish job growth, the technology industry has been constricting its own demand base.  Amidst all the job loss through technological automation, the tech industry's own hiring is constrained if fewer people can keep buying its products.  If the bottom 70-80% of US household income brackets can no longer keep up with technological upgrades, their ability to keep up with new economic opportunities will suffer as well.  

This is why monetization of technological progress into a dividend is crucial, which is where the ATOM Direct Universal Exponential Stipend (DUES) fits in.  It is so much more than a mere 'basic income', since it is directly indexed to the exact speed of technological progress.  As of April 2017, the estimated DUES amount in the US is $500/month (up from $400/month back in February 2016 when the ATOM was first published).  A good portion of this cushion enables faster technology upgrades and more new adoption.  

 

April 16, 2017 in Accelerating Change, Technology, The ATOM | Permalink | Comments (21)


Mortgages : The Ultimate FinTech Disruption

When people think of FinTech, they think of a few things like peer-to-peer lending, payment companies, asset management firms, or maybe even cryptocurrencies. But one of the most outdated yet burdensome costs in all of finance, spread across the widest range of people, is still overlooked. The mortgage lending process is heavily padded with fees that are remnants of a bygone age.

Enter the ATOM.  

First, we must begin with the effect of technology on short-term interest rates. The Fed Funds rate was close to zero for several years, and it is apparent that any brief increase in rates by the Federal Reserve will swiftly be reversed once markets punish the move in subsequent months. We are in an age of accelerating and exponential technological deflation, and not only will the Fed Funds rate have to be zero forever, but money-printing will be needed to offset deflation. This process has already been underway for years, and is not yet recognized as part of the long term trend of technological progress. 

A 30-year mortgage was the standard format for decades, with a variable-rate mortgage seen as risky compared to locking in a low rate for 30 years.  But when the Fed Funds rate was at nearly zero, the LIBOR (London Interbank Offered Rate) hovered around 0.18% or so.  A variable-rate mortgage is priced off of the LIBOR, plus a premium levied by the lending institution.  This premium is about 1.5% or more.  When the LIBOR was over 3% not too many years ago, the lender premium was only a third of the total rate, but now, it is 85-90% of it.  So instead of paying something near 0.18%, the borrower pays about 1.7%.  This huge buffer represents one of the most attractive areas for FinTech to disrupt, as what was once a secondary cost is now the overwhelmingly dominant padding, itself a remnant of a bygone age. 

When almost 90% of the interest charged in a mortgage merely represents the value that the lending institution provides, we can examine the components of this and see which of those could be replaced with a lower cost technological alternative. The lender, such as a major bank, provides a brand name, a mortgage officer to meet with face-to-face, and other such provisions. All of this is either unnecessary, or can be provided at much lower cost with the latest technologies. For example, blockchains can ensure the security aspects of the mortgage transaction are robust. Online consumer review services can provide an extra layer of reputational buttressing to any innovative new lending platform. The rationale for such a hefty mortgage markup over the underlying interest rate is just no longer there. 

If the lender premium in a mortgage falls from 85-90% down to, say, 50%, then the rate on an adjustable-rate mortgage will decline to just twice the LIBOR, or about 0.4%.  Even though the Federal Reserve has recently increased the Fed Funds rate, this is very temporary, and 0% will be the Fed Funds rate for the majority of the foreseeable future, just as it has been for the last 9 years. 
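The arithmetic above can be sketched directly, using the illustrative figures from the text (a 0.18% LIBOR and a 1.5% lender premium, not live market data) :

```python
# Sketch of the mortgage-rate arithmetic above, using the
# illustrative figures from the text (not live market data).
libor = 0.0018           # ~0.18% LIBOR
premium = 0.015          # ~1.5% lender premium
rate = libor + premium   # borrower's adjustable rate: ~1.68%

premium_share = premium / rate
print(f"ARM rate: {rate:.2%}, lender premium share: {premium_share:.0%}")
# the premium is ~89% of the total rate, consistent with the 85-90% cited

# If competition drives the premium share down to 50%, the premium
# equals LIBOR, and the rate becomes simply twice the LIBOR:
disrupted_rate = 2 * libor
print(f"Disrupted ARM rate: {disrupted_rate:.2%}")  # ~0.36%
```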

When this sort of ATOM-derived cost savings on interest payments percolates through the economy, it will cause a series of disruptions that will greatly reduce one of the last main consumer expenditures not yet being attacked by technology. Housing costs have risen above the inflation rate in many major cities, against the grain of technology. This is unnatural, since a home does not spontaneously renovate itself, get bigger, or otherwise increase in inherent value. On the contrary, the materials deteriorate over time, so the value should fall. Yet, home prices rise despite these structural forces, due to artificial decisions to restrict supply, lower bond yields through QE, etc. This artificial propping up of home prices masks the excessive costs in the industry, particularly in the mortgage-lending sector. As Fintech irons out the aforementioned outdated expenses in the mortgage-lending process, many fundamental assumptions about home ownership will change. 

Home ownership is a very emotional concept for many buyers (which is why there is a widespread misconception that a person 'owns' their home even while they are making mortgage payments on it, when in reality, ownership is achieved only when the mortgage is fully paid off). This emotion obscures the high costs of obsolete products and procedures that continue to reside in the mortgage industry. 

Amidst all the technological disruptions we have seen within the last generation, most people still don't understand that the central origin of most disruptions is an outdated, expensive incumbent system. But the FinTech wing of the ATOM has started the 'cracks in the dam' process against a very substantial and widely-levied cost, and this may be the disruption that brings FinTech's dividends to the masses. 

 

March 08, 2017 in Accelerating Change, Economics, The ATOM | Permalink | Comments (17)


ATOM Award of the Month, February 2017

After the inaugural award in January, a new month brings a new ATOM AotM.  This time, we go to an entirely different sector than we examined last time.  The award for this month goes to the collaboration between the Georgia Institute of Technology, Udacity, and AT&T to provide a fully accredited Master of Science in Computer Science (MSCS) degree, for the very low price of $6700 on average. 

The disruption in education is a topic I have written about at length.  In essence, most education is just a transmission of commoditized information, that, like every other information technology, should be declining in cost.  However, the corrupt education industry has managed to burrow deep into the emotions of its customers, to such an extent that a rising price for a product of stagnant (often declining) quality is not even questioned.  For this reason, education is in a bubble that is already in the process of deflating.  

What the MSCS at GATech accomplishes is four-fold :

  • Lowering the cost of the degree by almost an order of magnitude compared to the same degree at similarly-ranked schools
  • Making the degree available without relocation to where the institution is physically located
  • Scaling the degree to an eventual intake of 10,000 students, vs. just 300 that can attend a traditional in-residence program at GATech
  • Establishing best practices for other departments at GATech, and other institutions, to implement in order to create a broader array of MOOC degree programs

After a slow start, enrollment is now reported to be over 3300 students, representing a significant fraction of students presently studying MS-level computer science at equal or higher-ranked schools.  The only reason enrollment has not risen all the way to the full 10,000 is insufficient resourcefulness on the part of prospective students in shopping around and applying ATOM principles to greatly increase their living standards.  Aside from perhaps the very top schools such as MIT and Stanford, there is no greater value for money than the GATech MSCS, which will become apparent as slower adopters drift toward the program, particularly from overseas.  

Eventually, the sheer size of enrollment will rapidly lead to GATech becoming a dominant alumni community within computer science, forcing other institutions to catch up.  When this competition lowers costs even further, we will see one of the most highly paid and future-proof professions being accessible at little or no cost.  When contrasted to the immense costs of attending medical or law school, many borderline students will pursue computer science ahead of professions with large student debt burdens, creating a self-reinforcing cycle of ever-more computer science and ATOM propagation.  The fact that one can enroll in the program from overseas will attract many students from countries that do not even have schools of GATech's caliber (i.e. most countries), generating local talent despite remote education.  

Crucially, this is strong evidence of how the ATOM always finds new ways to expand itself, since the field most essential to feeding the ATOM, computer science, is the one that found a way to greatly increase the number of people destined to work in it, by attacking both cost thresholds and enrollment volumes.  This is not a coincidence, because the ATOM always finds a way around anything inhibiting its growth, in this case access to computer science training.  Subsequently, the ATOM can increase the productivity of education even in less ATOM-crucial fields such as medicine, law, business, and K-12, since the greatly expanded computer science profession will provide the entrepreneurs and expertise to make this happen.  This is how the ATOM captures an ever-growing share of the economy into rapidly-deflating technological fundamentals.   

As always, the ATOM AotM succeeds through reader suggestions, so feel free to suggest candidates.  Criteria include the size and scope of the disruption, how anti-technology the disrupted incumbent was, and an obvious improvement in the quality of a large number of lives through this disruption.  

Related :

The Education Disruption : 2015

11. Implementation of the ATOM Age for Individuals 

 

February 26, 2017 in Accelerating Change, ATOM AotM, Computing, Technology | Permalink | Comments (8)


ATOM Award of the Month, January 2017

With the new year, we are starting a new article series here at The Futurist.  The theme will be a recognition of exceptional innovation.  Candidates can be any industry, corporation, or individual that has created an innovation exemplifying the very best of technological disruption.  The more ATOM principles exhibited in an innovation (rising living standards, deflation acting in proportion to prior inflation in the incumbent industry, rapid continuous technological improvement, etc.), the greater the chance of qualification.

The inaugural winner of the ATOM Award of the Month is the US hydraulic fracturing industry.  While 'fracking' garnered the most news in 2011-13, the rapid technological improvements have continued.  Natural gas continues to hover around just $3, making the US one of the most competitive countries in industries where natural gas is a large input.  Oil prices continue to fall due to ever-improving efficiencies, and from the chart, we can see how many of the largest fields have seen breakevens fall from $80 to under $40 in just the brief 2013-16 period.  This is of profound importance, because now even $50 is a profitable price for US shale oil.  There is no indication that this trend of lower breakeven prices has stopped.  Keep in mind that the massive shale formations in California are not even being accessed yet due to radical obstruction, but a breakeven of $30 or lower ensures that the pressure to extract this profit from the Monterey Shale continues to rise.  Beyond that, Canada has not yet begun fracking of its own, and when it does, it will likely have at least as much additional oil as the US found.  
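As a quick sanity check, a fall from $80 to $40 over the three years from 2013 to 2016 implies a steep compound rate of cost decline (the round figures are from the chart description above; the extrapolation is my own illustration) :

```python
# Implied annual decline in shale breakeven prices, using the round
# figures cited above: $80 in 2013 falling to $40 by 2016.
start_price, end_price, years = 80.0, 40.0, 3
annual_change = (end_price / start_price) ** (1 / years) - 1
print(f"Breakeven change: {annual_change:.1%}/year")
# about -20.6%/year; three more years at this rate would point toward ~$20
```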

This increase, which is just an extra 3M barrels/day of US supply, was nonetheless enough to capsize this highly inelastic market and crash world oil prices from $100+ to about $50.  Given the improving breakevens and the possibility of new production, this will continue to pressure oil prices for the foreseeable future.  This has led to the US turning the tables on OPEC, reversing a large trade deficit into what is now a surplus.  If you told any of the 'peak oil' Malthusians that the US would soon have a trade surplus with OPEC, they would have branded you as a lunatic.  Note how that ill-informed Maoist-Malthusian cult utterly vanished.  Furthermore, this plunge in oil prices has strengthened the economies of other countries that import most of their oil, from Japan to India.  

Under ATOM principles, technology always finds a way to lower the cost of something that has become artificially expensive and is hence obstructing the advancement of other technologies.  Oil was a premier example of this, as almost all technological innovation is done in countries that have to import large portions of their oil, while almost none is done by oil exporters.  Excess wealth accumulation by oil exporters was an anti-technology impediment, and demanded the attention of a good portion of the ATOM.  Remember that the worldwide ATOM is of an ever-rising size, and comprises the sum total of all technological products in production at a given time (currently, about 2% of world GDP).  Hence, all technological disruptions are interconnected, and when the ATOM is freed up from the completion of a certain disruption, that amount of disruptive capacity becomes available to tackle something new.  Given the size of this disruption to oil prices and production geography, this occupied a large portion of the ATOM for a few years, which means a lot of ATOM capacity is now free to act elsewhere.

This disruption was also one of the most famous predictions of mine here at The Futurist.  In 2011, I predicted that high oil prices were effectively a form of burning a candle at both ends, and that such prices were jolting at least six compensating technologies into overdrive.  I provided an equation predicting when oil would topple, and it toppled well in accordance with that prediction (even sooner than the equation estimated).  

This concludes our very first ATOM AotM to kick off the new year.  I need candidate submissions from readers in order to get a good pool to select from.  Criteria include the size and scope of the disruption, how anti-technology the disrupted incumbent was, and an obvious improvement in the quality of a large number of lives through this disruption.  

 

January 31, 2017 in Accelerating Change, ATOM AotM, Energy, Technology, The ATOM | Permalink | Comments (36)


Google Talk on the ATOM

Kartik Gada had a Google Talk about the ATOM :  

 

December 26, 2016 in Accelerating Change, Artificial Intelligence, Economics, Technology, The ATOM, The Singularity | Permalink | Comments (25)


Artificial Intelligence and 3D Printing Market Growth

I came across some recent charts about the growth of these two unrelated sectors, one disrupting manufacturing, the other disrupting software of all types (click to enlarge).  On one hand, each chart commits the common error of portraying smooth parabolic growth, with no range of outcomes in the event of a recession (which will surely happen well within the 8-year timelines portrayed, most likely as soon as 2017).  On the other hand, these charts provide reason to be excited about the speed of progress in these two highly disruptive technologies, which are core pillars of the ATOM.  

This sort of growth rate across two quite unrelated sectors, while present in many prior disruptions, is often not noticed by most people, including those working in these particular fields.   Remember, until recently, it took decades or even centuries to have disruptions of this scale, but now we see the same magnitude of transformation happen in mere years, and in many pockets of the economy.  This supports the case that all technological disruptions are interconnected and the aggregate size of all disruptions can be calculated, which is a core tenet of the ATOM.   

Related :

3.  Technological Disruption is Pervasive and Deepening 

 

November 21, 2016 in Accelerating Change, Artificial Intelligence, Technology, The ATOM | Permalink | Comments (3)


The Federal Reserve Continues to Ignore Technological Deflation

The recent FOMC meetings continue to feature a range of debate only around the rate at which the Fed Funds rate can be increased up to about 4% (a level that has not coincided with a robust economy since the late 1990s).  They actually describe this as a 'normal' rate, and the process of raising it as 'normalization'.  The 'Dot Plot' pictured here indicates the paradigm that the Federal Reserve still believes.  Even the most 'dovish' members still think the Fed Funds rate will be above 2% by 2019.  

This is dangerously inaccurate.  At the start of 2016, the Federal Reserve expected to do four rate hikes this year alone.  Now they are down to an expectation of just two (one more than the single hike earlier this year), and may halt at just one.  How can a collection of supposedly the best and wisest economic forecasters be so consistently wrong?  A 20% stock market correction will lead to a swift rate reversal, and a 25%+ correction will lead to a resumption of QE in excess of $100B/month.  

As we can see in the ATOM e-book, technological deflation is endless and exponentially increasing, and hence the Wu-Xia shadow rate indicates the natural Fed Funds rate for the US to be around the equivalent of -2%.  Yes, minus two percent, achieved through the various rounds of QE done to date in order to simulate a negative interest rate.  The US stopped its QE in 2014, but continues to be held afloat by a portion of the $220B/month of worldwide central bank easing that flows into the US.  This is barely enough to keep US Nominal GDP (NGDP) growth at 3%, which is far below the level at which innovation can proceed at its trendline rate.  The connection between technological progress, technological deflation, and worldwide central bank action has still not been grasped by decision-makers.  

The -2% indicated by the Wu-Xia shadow rate might be as deep as -4% by 2025, under current trends of technological diffusion.  The worldwide central bank easing required to halt deflation by that time will be several times higher than today.  As per the ATOM policy reform recommendations, this can be an exceptionally favorable thing if the fundamentals are recognized.  

For the full analysis and thesis, read the ATOM e-book.  

 

Related ATOM Chapters :

4.  The Overlooked Economics of Technology

6. Current Government Policy Will Soon Be Ineffective

7. Government Policies Must Adapt, and Quickly

10. Implementation of the ATOM Age for Nations

 

September 22, 2016 in Accelerating Change, Economics, The ATOM | Permalink | Comments (33)


Invisible Disruptions : Deep Learning and Blockchain

In the ATOM e-book, we examine how technological disruption can be measured, and how the aggregate disruption ongoing in the world at any given time continues along a smooth, exponentially rising trendline.  Among these, certain disruptions are invisible to most onlookers, because a tangential technology is simultaneously disrupting seemingly unrelated industries from an orthogonal direction.  In that vein, here are two separate lists of industries that are being disrupted, one by Deep Learning and the other by Blockchain.    

13 Industries Using Deep Learning to Innovate. 

20 Industries that Blockchain could Disrupt


Note how many industries are present in both of the above lists, meaning that the sectors have to deal with compound disruptions from more than one direction.  

In addition, we see that sectors where disruption was artificially thwarted due to excessive regulation and government protectionism merely see a sharper disruption, higher up in the edifice.  When the disruption arrives through abstract technologies such as Deep Learning and Blockchain, the incumbents are unlikely to be able to thwart it, due to the source of the disruption being effectively invisible to the untrained eye.  What is understood by very few is that the accelerating rate of adoption/diffusion, depicted in this chart here from Blackrock, is enabled by such orthogonal forces that are not tied to any one product category or even industry.  

Related ATOM Chapters :

Technological Disruption is Pervasive and Deepening

The Overlooked Economics of Technology

 

September 13, 2016 in Accelerating Change, Technology, The ATOM | Permalink | Comments (9)


New Telescopes to Reveal Untold Wonders

A number of new telescopes will soon enter service, all of them far more powerful than their predecessors.  This is fully expected by any longtime reader of The Futurist, as space-related articles have been a favorite theme here.  

To begin, refer to the vintage 2006 article where I estimated telescope power to be rising at a compound annual rate of approximately 26%/year, although that is a trendline of a staircase with very large steps.  This, coincidentally, is exactly the same rate at which computer graphics technology advances, which also happens to be the square root of Moore's Law's rate of progress.  According to this timeline, a wave of powerful telescopes arriving now happens to be right on schedule.  Secondly, refer to one of the very best articles on The Futurist, titled 'SETI and the Singularity', where the impact of increasing telescopic power is examined.  The exponential increase in the detection of exoplanets (chart from Wikipedia), and the implications for the Drake Equation, are measured, with a major prediction about extraterrestrial life contained therein.  
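The 26%/year compound rate has convenient round-number implications, which can be checked quickly (the arithmetic below is my own illustration of the trendline, not figures from the 2006 article) :

```python
import math

# What a 26%/year compound improvement in telescope power implies.
rate = 1.26
per_decade = rate ** 10                        # gain over ten years
doubling_years = math.log(2) / math.log(rate)  # time for power to double
print(f"Gain per decade: {per_decade:.1f}x, doubling every {doubling_years:.1f} years")
# roughly a 10x gain per decade, with power doubling about every 3 years
```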

Building on that, in the ATOM e-book, I detail how accelerating technological progress has a major impact on space exploration.  Contrary to the widely-repeated belief that space exploration has plateaued since the Apollo program, technology has ensured that quite the opposite is true.  Exoplanet detection now runs in the hundreds per year (and soon the thousands), even as technologies such as 3D printing in space and asteroid mining are poised to generate great wealth here on Earth.  With space innovation no longer exclusively the domain of the US, costs have fallen through competition.  India launched a successful Mars orbiter, now in operation for two years, at 1/10th the cost of equivalent US or Russian programs.  

Related ATOM Chapters :

3. Technological Disruption is Pervasive and Deepening

12. The ATOM's Effect on the Final Frontier

 

 

August 28, 2016 in Accelerating Change, Space Exploration, The ATOM | Permalink | Comments (6)


Artificial Intelligence Finally Disrupting Medicine

The best news of the last month was something that most people entirely missed.  Amidst all the distractions and noise that comprise modern media, a quiet press release disclosed that a supercomputer has become more effective than human doctors at diagnosing certain types of ailments.  

IBM's Watson correctly diagnoses a patient after doctors are stumped.

This is exceptionally important.  As previously detailed in Chapter 3 of The ATOM, not only was a machine more competent than an entire group of physicians, but the machine continues to improve as more patients use it, which in turn makes it more attractive to use, which enables the accrual of even more data upon which to improve further.  

But most importantly, a supercomputer like Watson can treat patients in hundreds of locations in the same day via a network connection, and without appointments that have to be made weeks in advance.  Hence, such a machine replaces not one, but hundreds of doctors.  Furthermore, it takes very little time to produce more Watsons, but it takes 30+ years to produce a doctor from birth, drawn from the small fraction of humans with the intellectual ability to even become a physician.  The economies of scale relative to the present doctor-patient model are simply astonishing, and there is no reason that 60-80% of diagnostic work done by physicians cannot soon be replaced by artificial intelligence.  This does not mean that physicians will start facing mass unemployment, but rather that the best among them will be able to focus on more challenging problems.  The most business-minded of physicians can incorporate AI into their practice to see a greater volume of patients with more complicated ailments.  

This is yet another manifestation of various ATOM principles, from technologies endlessly crushing the cost of anything overpriced, to self-reinforcing improvement of deep learning.  

Related :  Eight paraplegics take their first step in years, thanks to robotics.  

Related ATOM Chapters :

3. Technological Disruption is Pervasive and Deepening

4. The Overlooked Economics of Technology

 

 

August 14, 2016 in Accelerating Change, Biotechnology, Technology, The ATOM | Permalink | Comments (4)


Tesla's Rapid Disruption

MIT Technology Review has an article describing how Tesla Motors has brought rapid disruption to the previously staid auto industry, where numerous factors had long precluded the entry of new companies.  But this is nothing new for readers of The Futurist, as I specifically identified Tesla as a key candidate for disruption way back in 2006.  In Venture Capital terms, this was an exceptionally good pick at such an early stage.  

In ATOM terms, the progress of Tesla is an example of everything from how all technological disruptions are interlinked, to how each disruption is deflationary in nature.  It is not just about the early progress towards electric cars, removal of the dealership layer of distribution, or the recent erratic progress of semi-autonomous driving.  Among other things, Tesla has introduced lower-key but huge innovations such as remote wireless software upgrades of the customer fleet, which itself is a paradigm shift towards rapidly-iterating product improvement.  In true ATOM form, the accelerating rate of technological change is beginning to sweep the automobile along with it.  

When Tesla eventually manages to release a sub-$35,000 vehicle, the precedents set in dealership displacement, continual wireless upgrades, and semi-autonomous driving will suddenly all be available across hundreds of thousands of cars, surprising unprepared observers but proceeding precisely along the expected ATOM trajectory.  

July 12, 2016 in Accelerating Change, Energy, Technology, The ATOM | Permalink | Comments (3)


Economic Growth is Exponential and Accelerating, or is it?

Chapter 2 of the ATOM e-book addresses the centuries-old accelerating trendline of economic growth.  Recall that this was the topic of an article of mine almost exactly 9 years ago as well.  

However, there may be more nuances to this concept than previously addressed.  It may be that since GDP is a human construct, it only happens to be correlated to the accelerating rate of change by virtue of humans being the forefront of advancing intelligence.  It could be that once artificial intelligence can advance without human assistance, most types of technology that improve human living standards may stagnate, since the grand goal of propagating AI into space is no longer bottlenecked by human progress.  Humans are certainly not the final state of evolution, as evidenced by the much greater suitability of AI for space exploration (AI does not require air or water, etc.).  

That is certainly something to think about.  Human progress may only be on an accelerating curve until a handoff to AI is completed.  After that, metrics quite different from GDP may be the best measures of progress, as the AI perhaps only cares about computational density, TERAFLOPs, etc.  

 

July 04, 2016 in Accelerating Change, Economics, The ATOM | Permalink | Comments (1)


The Technological Progress of Video Games, Updated

A decade ago, in the early days of this blog, we had an article tracking video game graphics at 10-year intervals.  As per that cadence, it is time to add the next entry to the progression.  

The polygon count in any graphical engine increases as the square root of Moore's Law; since transistor counts double roughly every 18 months, the number of polygons doubles every three years.  
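As a rough sketch of that arithmetic (the 18-month and 3-year doubling periods are the assumptions stated above, not measured data):

```python
# Toy model: transistor budgets double every 1.5 years (Moore's Law),
# and polygon throughput scales as the square root of transistor count,
# so the polygon budget doubles every 3 years.

def polygon_growth_factor(years_elapsed, doubling_period=3.0):
    """Relative polygon budget after `years_elapsed` years."""
    return 2 ** (years_elapsed / doubling_period)

# Growth relative to the 1976 baseline, at the 10-year snapshots below:
for year in (1976, 1986, 1996, 2006, 2016):
    print(f"{year}: ~{polygon_growth_factor(year - 1976):,.0f}x")
```

Under this model, each 10-year interval in the image series corresponds to roughly a 10x jump in polygon budget, which is why each decade's screenshot looks categorically different from the last.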

Sometimes, pictures are worth thousands of words :

1976 :

Pong

1986 :

Enduro Racer (arcade)

1996 :

Tomb Raider: Tomb of Qualopec

2006 :

Vision GT (2006)

I distinctly remember when the 2006 image looked particularly impressive.  But now, it no longer does.  This inevitably brings us to...

2016 (an entire video is available, with some gameplay footage) : 

 

This series illustrates how progress, while barely visible over one or two years, compounds into dramatic change over longer periods of time.   

Now, extrapolating this trajectory of exponential progress, what will games bring us in 2020?  Or 2026?  Additionally, note that screen sizes, screen resolution, and immersion (e.g. VR goggles) have risen simultaneously.  

 

April 01, 2016 in Accelerating Change, Computing, Technology | Permalink | Comments (6)


The End of Petrotyranny - Victory

I refer readers back to an article written here in 2011, titled 'The End of Petrotyranny', where I claimed that high oil prices were rapidly burning through the buffer that was shielding oil from technological disruption.  I quantified the buffer in an equation, and even provided a point value for how much of the buffer was still remaining at the time.

I am happy to declare a precise victory for this prediction, with oil prices having fallen by two-thirds and remaining there for well over a year.  While hydraulic fracturing (fracking) turned out to be the primary technology to bring down the OPEC fortress, other technologies such as photovoltaics, batteries, and nanomaterials contributed secondary pressure to the disruption.  The disruption unfolded in accordance with the 2011 Law of Finite Petrotyranny :

From the start of 2011, measure the dollar-years of area enclosed by a chart of the price of oil above $70.  There are only 200 such dollar-years remaining for the current world petro-order.  We can call this the 'Law of Finite Petrotyranny'. 

Go to the original article to see various scenarios of how the dollar-years could have been depleted.  While we have not used up the full 200 dollar-years to date, the range of scenarios is now much tighter, particularly since fracking in the US continues to lower its breakeven threshold.  Over $2T/year that was flowing from oil importers to oil producers has now vanished, to the immense benefit of oil importers, which are the nations that conduct virtually all technological innovation.  
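The law itself is simple to compute: for each year, add up the amount by which the average oil price exceeds the $70 floor.  A minimal sketch (the price series below is an illustrative placeholder, not actual market data):

```python
# 'Law of Finite Petrotyranny': the area enclosed above $70 on a chart
# of oil prices, measured in dollar-years, is capped at 200 for the
# current world petro-order.  Using annual average prices, each year
# contributes max(price - 70, 0) dollar-years.

FLOOR = 70.0     # $/barrel threshold from the 2011 article
BUDGET = 200.0   # total dollar-years available to the petro-order

def dollar_years_above_floor(annual_avg_prices, floor=FLOOR):
    """Sum each year's excess of the average price over the floor."""
    return sum(max(p - floor, 0.0) for p in annual_avg_prices)

# Illustrative annual average prices ($/barrel), not real data:
prices = [95, 111, 112, 109, 99, 52, 43, 51, 65, 57]
used = dollar_years_above_floor(prices)
print(f"{used:.0f} of {BUDGET:.0f} dollar-years consumed; "
      f"{BUDGET - used:.0f} remaining")
```

Note that years spent below $70 consume none of the budget, which is why a price crash stretches out the remaining lifespan of the petro-order rather than ending it outright.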

The 2011 article was not the first time this subject of technological pressure rising in proportion to the degree of oil price excess has been addressed here at The Futurist.  There were prior articles in 2007, as well as 2006 (twice).  

As production feverishly scales back, and some of the less central petrostates implode, oil prices will gradually rise back up, generally saturating at the $70 level (itself predicted in 2006) in order to deplete the remaining dollar-years.  But we may never again see oil at such a high price relative to world GDP, as existed from most of 2007-14 (oil would have to be $200+/barrel today to surpass the record of $147 set in 2008, in proportion to World GDP).

 

March 08, 2016 in Accelerating Change, Economics, Energy, Technology | Permalink | Comments (22)


Two Overdue Technologies About to Arrive

The rate of technological change has been considerably slower than its trendline ever since the start of the 21st century.  I wrote about this back in 2008, but at the time, I did not have such advanced techniques for observing and measuring the gap between the rate of change and the trendline as I do now.

The dot-com bust coincided with a trend toward lower nominal GDP (since everyone wrongly focuses on 'real' GDP, which has less to do with real-world decisions than nominal GDP), and this has led to technological change, despite sporadic bursts, generally progressing at what is currently only 60-70% of its trendline rate.  For this reason, many technologies that seemed just 10 years away in 2000 have still not arrived as of 2014.  I will write much more on this at a later date.

But for now, two overdue technologies are finally plodding towards where many observers thought they would have been by 2010.  Nonetheless, they are highly disruptive, and will do a great deal to change many industries and societies.  

1) Artificial Intelligence : 

A superb article by Kevin Kelly in Wired Magazine describes how three simultaneous breakthroughs have greatly accelerated the capabilities of Artificial Intelligence (AI).  Most disruptions are the result of two or more seemingly unrelated technologies crossing certain thresholds at once, and observers tend to be surprised because each group of observers was following only one of the technologies.  For example, the iPod emerged when it did because storage, processing, and the ability to store music as software all reached certain cost, size, and power consumption thresholds at around the same time.  

What is interesting about AI is how it can greatly expand the capabilities of those who know how to incorporate AI with their own intelligence.  The greatest chess grandmaster of all time, Magnus Carlsen, became so by training with AI, and it is unclear that he would have become this great had he lived before a time when such technologies were available.  

The recursive learning aspect of AI means that an AI can quickly learn more from each new person who uses it, which makes it better still.  One very obvious area where this could be used is in medicine.  Currently, billions of patients are seen by millions of MD general practitioners and pediatricians, mostly for relatively common diagnostics and treatments.  If a single AI can learn enough from patient inputs to take over the most common diagnostic tasks of doctors, then that is a huge cost savings to patients and the entire healthcare system.  Some doctors will see their employment prospects shrink, but the majority will be free to move up the chain and focus on more serious medical problems and questions.  

Another obvious use is in the legal system.  On one hand, while medicine is universal, the legal system of each country is different, and lawyers cannot cross borders.  On the other hand, the US legal system relies heavily on precedent, and there is too much content for any one lawyer or judge to manage, even with legal databases.  An AI can digest all laws and precedents and create a huge increase in efficiency once it learns enough.  This can greatly reduce the backlog of cases in the court system, and free up judicial capacity for the most serious cases.  

The third obvious application is in self-driving cars.  Driving is an activity where the full range of possible traffic situations that can arise, while large, is still a finite and learnable dataset.  Once an AI gets to the point where it has analyzed every possible accident, near-accident, and reported pothole, it can easily make self-driving cars far safer than human driving.  This is already being worked on at Google, and is only a few years away.  

Get ready for AI in all its forms.  While many jobs will be eliminated, this will be exceeded by the opportunity to add AI into your own life and your own capabilities.  Make your IQ 40 points higher than it is when you need it most, and your memory thrice as deep - all will be possible in the 2020s for those who learn to use these capabilities.  In fact, being able to augment your own marketable skills through the use of AI might become one of the most valuable skillsets for the post-2025 workforce.   

2) Virtual Reality/Augmented Reality : 

Longtime readers recall that in 2006, I correctly predicted that by 2012-13, video games would be a greater source of entertainment than television.  Now, we are about to embark on the next phase of this process, as a technology that has had many false starts for over 20 years might finally be approaching reality.  

Everyone knows that the Oculus Rift headset will be released to consumers in 2015, and that most who have tried it have had their expectations exceeded.  It supposedly corrects many of the problems of previous VR/AR technologies that have dogged developers for two decades, and has a high resolution.  

But entertainment is not the only use for a VR/AR headset like the Oculus Rift, for the immersive medium that the device facilitates has tremendous potential for use in education, military training, and all types of product marketing.  Entirely new processes and business models will emerge.  

One word of caution, however.  My decade of direct experience running a large division of a consumer technology company compels me to advise you not to purchase any consumer technology product until it is in its third generation of consumer release, which is usually 24-48 months after initial release.  The reliability and value for money are usually not compelling until the third generation.  Do not mistake fractional generations (i.e. 'version 1.1', or 'iPhone 5, 5S, and 5C') for actual generations.  The Oculus Rift may be an exception to this norm (as are many Apple products), but in general, don't be an early adopter on the consumer side.  

Update (5/27/2016) : The same Kevin Kelly has an equally impressive article about VR/AR.  

Combining the Two :

Imagine, if you would, that the immersive movies and video games of the near future are not just fully actualized within the VR of the Oculus Rift, but that the characters of the video game adapt via connection to some AI, so that game characters emerge that are far too intelligent to be overcome by hacks and cheat codes.  

Similarly, imagine if various forms of training and education are not just improved via VR, but augmented via AI, where the program learns exactly where the student is having a problem, and adapts the method accordingly, based on similar difficulties from prior students.  Suffice it to say, both VR and AI will transform medicine from its very foundations.  Some doctors will be able to greatly expand their practices, while others will find themselves relegated to obsolescence.  

Two overdue technologies are finally on our doorstep.  Make the most of them, because if you don't, someone else surely will.  

Related :

The Next Big Thing in Entertainment 

Timing the Singularity

The Impact of Computing : 78%/year

 

December 21, 2014 in Accelerating Change, Computing, Technology | Permalink | Comments (28)


The Education Disruption : 2015

I was not going to write an article, except that this disruption is so imminent that if I wait any longer, this article would no longer be a prediction.  Long-time readers may recall how I have often said that the more overdue a disruption is, the more sudden it is when it finally occurs, and the more off-guard the incumbents are caught.  We are about to see a disruption in one of the most anti-productivity, self-important, and corrupt industries of them all, and not a moment too soon.  High-quality education is about to become more accessible to more people than ever before.  

The Natural Progression of Educational Efficiency : The great Emperor Charlemagne lived in a time when even most monarchs (let alone peasants) were illiterate.  Charlemagne had a great interest in attaining literacy for himself and fostering it in others.  But the methods of education in the early 9th century were primitive, and books were handwritten, and hence scarce.  Despite all of his efforts, Charlemagne only managed to learn to read after the age of 50, and never quite learned how to write.  This indicates how hard it was to attain modern standards of basic literacy at the time.  

Over time, as the invention of the printing press enabled the mass production of books, literacy became less exclusive over the subsequent centuries, and methods of teaching that could teach the vast majority of six-year-old children how to read became commonplace, delivered en masse via institutions that came to be known as 'schools'.  Since most of us grew up within a mass-delivered classroom model with minimal customization, we consider this method of delivery to be normal, and almost every parent can safely assume that if their child has an IQ above 80 or so, they will be able to read competently at the right age.  

But consider what the Internet age has made available for those who care to take it.  I can say with great certainty that the most valuable things I have learned have all been derived from the Internet, free of cost.  Whether it was the knowledge that led to new income streams, new social capital, or any other useful skills, it was available over the Internet, and that too in just the last decade.  Almost every challenge in life has an answer that can be found online.  This brings up the question of whether formal schooling, and the immense price tag associated with it, is still the primary source from which a person can attain the most marketable skills.   

Why Education Became an Industry Prone to Attracting Inefficiency : To begin, we first have to address some of the adverse conditioning that most people receive, about what education is, what it should cost, and where it can be obtained.  Through centuries of marketing that preys on human insecurity at being left behind, and the tendency to conflate correlation with causation, an immense bubble has inflated over a multi-decade period, and is at its very peak.  

Education, which in the bottom 99.9% of classroom settings is really just the transmission of highly commoditized information, has usually correlated to greater economic prospects, especially since, until recently, very few people were likely to surpass the threshold beyond which further education no longer has a tight correlation to greater earnings.  This is why many parents are willing to spare no expense on the education of their children, even to the extent of having fewer children than they might otherwise have had, when estimating the cost of educating them.  Exploiting the emotions of parents, the education industry manages to charge ever more money for a product that is often declining in quality, with surprisingly little questioning from its customers.  We are so accustomed to this unrelenting rise in costs at all levels of education that few people realize how highly perverse it is.  

Glenn Reynolds of Instapundit, with his books 'The Higher Education Bubble' and 'The K-12 Implosion', has been the earliest and most vocal observer of a bubble in the education industry.  The vast corruption and sexual misconduct by faculty in K-12 public schools is described in the latter of those two books, but over here, we will focus mostly on higher education.  

Among the dynamics he has described are how government subsidization of universities directly as well as of student loans enables universities to increase fees at a rate that greatly outstrips inflation, which in turn allows universities to hire legions of non-academic staff, many of whom exist only to politicize the university experience and further the goals of politicians and government bureaucrats.    

As a result, university degrees have gotten more expensive, while the salaries commanded by graduates have remained flat or even fallen.  The financial return of many university degrees no longer justifies their cost, and this is true not just of Bachelor's Degrees, but even of many MBA and JD degrees from any school ranked outside the Top 10 or even Top 5.  

Graduates often have as much as $200,000 in debt, yet have difficulty finding jobs that pay more than $50,000 a year.  Student loan debt has tripled in a decade, even while many universities now see no problem in departing from their primary mission of education, and have drifted into a priority of ideological brainwashing.  Combine all these factors, and you have a generation of young people who may have student debt larger than the mortgage on a median American house (meaning they will not be the first-time home purchasers that the housing market depends on to survive), while having their head filled with indoctrination that carries zero or even negative value in the private sector workforce.  

When you combine this erosion of value with the fact that it now takes just minutes to research a topic, from home and at any hour, that previously would have involved half a day at the public library, why should the same sort of efficiency gain not apply to the more formal types of education that are actually becoming scarcer within universities?

Primed For Creative Destruction : Employers want skills, rather than credentials.  There may have been a time when a credential had a tight correlation with a skillset that an employer sought in a new hire, but that has weakened over time, given the dynamic nature of most jobs and the dilution of rigor that most degrees have undergone.  Furthermore, technology makes many skillsets obsolete, while creating openings for new ones.  With the exception of those with highly specialized advanced degrees, very few people over the age of 30 today can say that the demands of their current job have much relevance to what they learned in college, or even to the computing, productivity, and research tools they may have used in college.  Furthermore, anyone who has worked at a corporation for a decade or more is almost certainly doing a very different job than the one they were first hired to do.  

Hence, the superstar of the modern age is not the person with the best degree, but rather the person who acquires the most new skills with the greatest alacrity, and the person with the most adaptable skillset.  A traditional degree has an ever-shortening half-life of relevance as a person's career progresses, and even fields like Medicine and Law, where one cannot practice without the requisite degree, will not be exempt from this loosening correlation between pedigree and long-term career performance.  Agility and adaptability will supersede all other skillsets in the workforce.    

Google, always leading the way, no longer mandates college degrees, and has recently disclosed that about 14% of its employees do not have them.  If a few other technology companies follow suit, then the workforce will soon have a pool of people working at very desirable employers, who managed to attain their position without the time and expense of college.  If employers in less dynamic sectors still have resistance to this concept, they will find it harder to ignore the growing number of resumes from people who happen to be alumni of Google, despite not having the required degree.  As change happens on the margins, it will only take a small percentage of the workforce to be hired by prestigious employers.           

The Disruption Begins at the Top : Since this disruption is technological and almost entirely about software, perhaps the disruption has to originate where the people most directly responsible for the disruption exist.  The program that has the potential to slash the costs of entry into a major career category is an online Master of Science in Computer Science (MSCS) degree through a collaboration between the Georgia Institute of Technology, Udacity, and AT&T.  For an estimated cost of just $6700, this program can enroll 10,000 geographically dispersed students at once (as opposed to the mere 300 MSCS degrees per year that Georgia Tech was awarding previously).  This is a tremendous revolution in terms of both cost and capacity.  A degree that can make a graduate eligible for high-paying jobs in a fast-growing field, is now accessible to anyone with the ability to succeed in the program.  The implications of this are immense.  

For one thing, this profession, which happens to be one with possibly the fastest-growing demand, has itself found a way to greatly increase the influx of new contributors to the field.  By removing both cost and geographical location, the program competes not just with brick and mortar MSCS programs, but with other degrees as well.  Students who may have otherwise not considered Computer Science as a career at all, may now choose it simply due to the vastly lower cost of preparation relative to similarly high-paying careers like other forms of engineering, law, or medicine.  Career changers can jump the chasm at lower risk than before, for the same reasons.  

As fields similarly suitable to remote learning (say, systems engineering, mathematics, or certain types of electrical engineering) see MOOC degree programs created for them, more avenues open up.  Fields where education can be more easily transmitted to this model will see an inherent advantage over fields that cannot be learned this way, in terms of attracting talent.  These fields in turn grow in size, becoming a larger portion of the economy, and creating even more demand for new entrants above a certain competence threshold.  

But these fields are still not the 'top' echelon of professional excellence.  The profession that is the most widespread, most dynamic, most durable, and has created the greatest wealth, is one that universities almost never do a good job of teaching or even discussing : that of entrepreneurship.  I have stated before that the ever-increasing variety of technological disruption means that the foremost career of the modern era is that of the serial entrepreneur.  If universities are not the place where the foremost career can be learned, then how important are formal degrees from these universities?  Since each entrepreneurial venture is different, the individual will have to synthesize a custom solution from available components.  

Multi-Faceted Disruption : As The Economist has noted, MOOCs have not yet unleashed a 'gale of Schumpeterian creative destruction' onto universities.  But this is still a conflation of the degree and the knowledge, particularly when the demands of the economy may shift many times during a person's career.  Udacity, Coursera, MITx, Khan Academy, and Udemy are just a few of the entities enabling low-cost education at all levels.  Some are for-profit, some are non-profit.  Some address higher education, and some address K-12 education.  Some count as credit towards degrees, and some are not intended for degree-granting, but rather for remedial learning.  But among all these websites, an innovative pupil can learn a variety of seemingly unrelated subjects and craft an interlocking, holistic education that is specific to his or her goals.  

When the education available online comes in so many sizes and shapes, many assumptions about who has what skills will be challenged.  There will be too many counterexamples against the belief that a certain degree qualifies a person for a certain job.  Furthermore, the standardization of resumes and qualifications that the paradigm of degrees creates has gone largely unchallenged.  People who are qualified in two or more fields will be able to cast a wider net in their careers, and entrepreneurs seeking to enter a new market can get up to speed swiftly.  

Scale to the Topmost Educators : There was a time when music and video could not be recorded.  Hundreds of orchestras across a nation might be playing the same song, or the same play might be performed by hundreds of thespians at the same time.  Recording technologies enabled the most marketable musicians and actors to reach millions of customers at once, benefiting consumers and the best producers alike, while the lesser producers could no longer justify their presence in the marketplace and had to adapt.

The same will happen to teachers.  It is not efficient for the same 6th-grade math or 8th-grade biology lesson to be taught by hundreds of thousands of teachers across the English-speaking world each year.  Instead, technology will enable scale and efficiency.  The best few lectures will be seen by all students, and it is quite possible that the best teacher, as determined by market demand, earns far more than one currently thinks a teacher can earn.  The rise of the 'celebrity teacher' is entirely possible, when one considers the disintermediation and concentration that has already happened with music and theatrical production.  This sort of competition will increase the quality of instruction that students receive, and ensure that remuneration is more closely tied to teacher caliber.  

Conclusion : It is not often that we see something experience a dramatic worsening in cost/benefit ratio while competitive alternatives simultaneously become available at far lower costs than just a few years prior.  When a status quo has existed for the entire adult lifetime of almost every American alive today, people fail to contemplate the peculiarity of spending as much as the cost of a house on a product of highly variable quality, very uncertain payoff, and very little independent auditing.  The degree of outdatedness in the assumption that paying a huge price for a certain credential will lead to a certain career with a certain level of earnings means the edifice will topple far more quickly than many people are prepared for.  

2015 is a year that will see the key components of this transformation fall into place.  Some people will enter the same career while spending $50,000 less on the requisite education than they may have expected.  Many colleges will shrink their enrollments or close their doors altogether.  The light of accountability will be shone on the vast corruption and ideological extremism present in some of the most expensive institutions (Moody's has already downgraded the outlook of the entire US higher education industry).  But most importantly, the most valuable knowledge will become increasingly self-taught from content available to all, and the entire economy will begin the process of adjusting to this new reality.  

See Also : 

The Carnival of Creative Destruction

July 23, 2014 in Accelerating Change, Core Articles, Technology | Permalink | Comments (59)


The End of Petrotyranny

As oil prices remain high, we once again see murmurs of anticipated doom from various quarters.  Such fears are grossly miscalculated, as I have described in my 2007-08 articles about how oil at $120/barrel creates desirable chain reactions, as well as my rebuttal to the poorly considered beliefs of peak oil alarmists, who seem capable of being sold not one, but two bridges in Brooklyn.  Today, however, I am going to combine the concepts in both of those articles with some new analysis I have done to enable us to predict when oil will lose the economic power it currently holds.  You are about to see that not only are peak oil alarmists wrong, but they are just about as wrong as those predicting in 1988 that the Soviet Union would soon dominate the world, and will soon be equally worthy of ridicule.

Unenlightened Punditry and Fashionable Posturing :

As I mentioned in a previous article, many observers incessantly contradict themselves on whether they want oil to be inexpensive, or whether they want higher oil prices to spur technological innovations.  One of the most visible such pundits is Thomas Friedman, who has many interesting articles on the subject, such as his 2007 piece titled 'Fill 'Er Up With Dictators' :

But as oil has moved to $60 to $70 a barrel, it has fostered a counterwave — a wave of authoritarian leaders who are not only able to ensconce themselves in power because of huge oil profits but also to use their oil wealth to poison the global system — to get it to look the other way at genocide, or ignore an Iranian leader who says from one side of his mouth that the Holocaust is a myth and from the other that Iran would never dream of developing nuclear weapons, or to indulge a buffoon like Chávez, who uses Venezuela’s oil riches to try to sway democratic elections in Latin America and promote an economic populism that will eventually lead his country into a ditch.

But Mr. Friedman is a bit self-contradictory on which outcome he wants, as evidenced across his New York Times columns.

Over here, he says :

In short, the best tool we have for curbing Iran’s influence is not containment or engagement, but getting the price of oil down

And here, he says :

So here’s my prediction: You tell me the price of oil, and I’ll tell you what kind of Russia you’ll have. If the price stays at $60 a barrel, it’s going to be more like Venezuela, because its leaders will have plenty of money to indulge their worst instincts, with too few checks and balances. If the price falls to $30, it will be more like Norway. If the price falls to $15 a barrel, it could become more like America

Yet over here he says :

Either tax gasoline by another 50 cents to $1 a gallon at the pump, or set a $50 floor price per barrel of oil sold in America. Once energy entrepreneurs know they will never again be undercut by cheap oil, you’ll see an explosion of innovation in alternatives.

As well as over here :

And by not setting a hard floor price for oil to promote alternative energy, we are only helping to subsidize bad governance by Arab leaders toward their people and bad behavior by Americans toward the climate.

All of these articles were written within a 4-month period in early 2007.  Both philosophies are true by themselves, but they are mutually exclusive.  Mr. Friedman, what do you want?  Higher oil prices or lower oil prices?  Such confusion indicates how the debate about energy costs and technology is often high on rhetoric and low on analysis. 

Much worse, however, is the fashionable scaremongering that the financial media uses to fill up their schedule, amplified by a general public that gets suckered into groupthink.  To separate the whining from the reality, I apply the following simple test to verify whether people are actually being pinched by high oil prices or not.  If a large portion of average Americans have made arrangements to carpool to work (as was common in the 1970s), then oil prices are high.  Absent the willingness to make this adjustment, their whining about gasoline is not a reflection of actual hardship.  This enables us to declare that oil prices are not approaching crisis levels until most 10-mile-plus commuters are carpooling, that too in groups of three rather than just two.  Coordination of carpools is thus the minimum test of whether oil prices are actually causing any significant changes in behavior. 

Fortunately, $100 oil, a price that was considered a harbinger of doom as recently as 2007, is now not even enough to induce carpooling in 2011.  This quiet development is remarkably unnoticed, and conceals the substantial economic progress that has occurred.   

Economic Adaptations :

The following chart from Calculated Risk (click to enlarge) shows the US trade deficit split between oil and non-oil imports.  This chart is not indexed as a percentage of GDP, but if it were, we would see that oil imports at $100/barrel today are not a much higher percentage of GDP than in 1998, when oil was just $20/barrel.  In fact, the US produces much more economic output per barrel of oil compared to 1998.  We can thus see that unlike in 1974, when the US economy had much less demand elasticity for oil, today the economy's ability to adjust oil consumption quickly in reaction to higher prices makes the bar for an 'oil shock' much harder to clear.  US oil imports will never again attain the same percentage of GDP as was briefly seen in 2008. 

Of even more importance is the amazingly consistent per capita consumption of oil since 1982, which has remained at 4.6 barrels/person despite a tripling of real GDP per capita over the same period (chart by Morgan Downey, author of Oil 101).  This immediately deflates the claim that the looming economic growth of China and India will greatly increase oil consumption, since the massive growth from 1982 to 2011 did not manage to do this.  At this point, annual oil consumption, currently around 32 billion barrels, rises only at the rate of population growth - about 1% a year. 

This leads me to make a declaration.  32 billion barrels at around $100/barrel is $3.2 Trillion in annual consumption.  This is currently less than 5% of nominal world GDP.  I hereby declare that :

Oil consumption worldwide will never exceed $4 Trillion/year, no matter how much inflation, political turmoil, or economic growth there is.  Thus, 'Peak Oil Consumption' happens long before 'Peak Oil Supply' ever could. 

This would mean that oil would gradually shrink as a percentage of world GDP, just as it has shrunk as a percentage of US GDP since 1982.  Even when world GDP is $150 Trillion, oil consumption will still be under $4 Trillion a year, and thus a very small percentage of the economy.  Mark my words, and proceed further to read about how I can predict this with confidence.   
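The arithmetic behind this declaration can be sketched in a few lines.  The barrel count, price, and $4 Trillion ceiling are the article's own figures; the $65 Trillion world GDP is my illustrative assumption (the article states only that $3.2 Trillion is under 5% of nominal world GDP):

```python
def oil_share_of_gdp(barrels_per_year, price_per_barrel, world_gdp):
    """Oil spending as a fraction of a given GDP."""
    return (barrels_per_year * price_per_barrel) / world_gdp

# The article's 2011 figures: 32 billion barrels at ~$100/barrel,
# against an assumed ~$65 Trillion world GDP.
today = oil_share_of_gdp(32e9, 100, 65e12)
print(f"Today: {today:.1%}")            # just under 5%

# The declaration's ceiling: $4 Trillion of spending against a
# future $150 Trillion world GDP.
future = 4e12 / 150e12
print(f"Future ceiling: {future:.1%}")  # under 3%
```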

The Carnival of Creative Destruction :

There are at least seven technologies that are advancing to reduce oil demand by varying degrees, many of which have been written about separately here at The Futurist : 

1) Natural Gas : Technologies that aid the discovery of natural gas have advanced at great speed, and supplies have skyrocketed to a level that exceeds anything humanity could consume in the next few decades.  The US alone has enough natural gas to more than offset all oil consumption, and the price of natural gas is currently on par with $50 oil. 

2) Efficiency gains : From innovations in engine design, airplane wing shape, reflective windows, and lighter nanomaterials, efficiency is advancing rapidly, to the extent that economic growth no longer increases oil consumption per capita, as described earlier.  There are many options available to consumers seeking 40 mpg or higher without sacrificing too much power or size, and I predicted back in early 2006 that in 2015, a 4-door family car with a 240 hp engine would deliver 60 mpg (or equivalent) yet still cost no more than $35,000 in 2015 dollars.  People scoffed at that prediction then, but now it seems quite safe.   

3) Cellulose Ethanol and Algae Oil : Corn ethanol was never going to be suitable in cost or scale, but the infrastructure established by the corn ethanol industry makes the transition to more sophisticated forms of ethanol production easier.  Fuels from switchgrass and algae are much more cost-effective, and will be ramping up in 2012.  Solazyme, an algae oil company that went public recently, already has a market capitalization of $1.5 Billion. 

4) Batteries : Most of the limitations of electric and hybrid vehicles stem from shortcomings in battery technology.  However, since batteries are improving at a rate that is beginning to exceed the traditional 5-8% per year, and companies such as Tesla are able to lower the cost of their fully electric vehicles, the knee of the curve is near. 

5) Telepresence : Telepresence, while expensive today, will drop in price under the Impact of Computing and displace a substantial portion of business air travel, as described in detail here.  By 2015, geographically dispersed colleagues will seem to be closer to each other, despite meeting in person less often than they did in 2008.   

6) Wind Power : Wind Power already generates almost 3% of global electricity consumption, and is growing quickly.  When combined with battery advances that improve the range and power of electric and plug-in hybrid vehicles, we get two simultaneous disruptions - oil being displaced not just by electricity, but by wind electricity.    

7) Solar Power : This source today generates the least power among those listed here.  But it is the fastest growing of the group, with multiple technologies advancing at once, and with decades of steady price declines finally reaching competitive price points.  It also has many structural advantages, most notably the fact that it can be deployed on land that is currently unused and inhospitable.  Many of the countries with the fastest growth in energy consumption are also those with the greatest solar intensity. 

Plus, these are just the technologies that displace oil demand.  There are also technologies that increase oil supply, such as supercomputing-assisted oil discovery and new drilling techniques.  Supply-increasing technologies reduce oil prices, and while they may slow the displacement of oil demand, they too work to weaken petrotyranny. 

The problem in any discussion of these technologies is that the debate centers around an 'all or none' simplicity of whether the alternative can replace all oil demand, or none at all.  That is an unnuanced exchange that fails to comprehend that each technology only has to replace 10% of oil demand.  Natural gas can replace 10%, ethanol another 10%, efficiency gains another 10%, wind + solar another 10%, and so on.  Thus, if oil consumption as a percentage of world GDP is lower in a decade than it is today, that itself is a huge victory.  It hardly matters which technology advances faster than the others (in 2007, natural gas did not appear as though it would take the lead that it enjoys today), what matters is that all are advancing, and that many of these technologies are highly complementary to each other.     

What is also overlooked is how quickly the pressure to shift to alternatives grows as oil becomes more expensive.  If, say, cellulose ethanol is cost-effective with oil at $70, then oil at $80 creates a modest $10 differential in favor of cellulose.  If oil is $120, the differential is $50, or five times larger.  Such a delta draws much greater investment and urgency into ramping up research and production of cellulose ethanol.  Thus, each increment in the oil price creates a much larger zone of profitability for every alternative. 
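The widening of that incentive can be expressed in one line.  The $70 break-even for cellulose ethanol is the article's hypothetical; the function merely restates it:

```python
def alt_advantage(oil_price, breakeven=70):
    """Per-barrel cost advantage of the alternative fuel over oil.

    Zero below the break-even price; grows dollar-for-dollar above it,
    which is why each price increment enlarges the zone of profitability.
    """
    return max(0, oil_price - breakeven)

print(alt_advantage(80))   # 10: a modest differential
print(alt_advantage(120))  # 50: five times larger
```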

The Cost of Petrotyranny :

This map of nations scaled in proportion to their petroleum reserves (click to enlarge) replaces thousands of words.  Some contend that the easy money derived from exporting oil leads to inevitable corruption and the financing of evil well beyond the borders of petro-states, while others lament the misfortune that this major energy source is concentrated in a very small area containing under 2% of the world's population.  Other sources of energy, such as natural gas, are much more evenly distributed across the planet, and this supply chain disadvantage is starting to work against oil.   

However, as we saw in the 2008 article, many of these regimes are dancing on a beam only as wide as the span between $70 and $120/barrel oil.  While a price below $70 would be fatal to the current operations of Iran, Venezuela, and Russia, even a high price shrinks export revenue, as rising domestic consumption reduces export volumes by more than the price rise can offset.  Furthermore, higher prices accelerate the advance of the previously mentioned technologies.  For the first time, we can now estimate how long oil can still hold such an exalted economic status. 

Quantifying the Remaining Petro-Yoke :

For the first time, we can make the analysis of the technological and political pressure exerted by a particular oil price more precise.  We can now quantify the rate of technological demand destruction, and predict the actual number of years before oil ceases to have any ability to cause economic recessions, and before regimes like Iran, Venezuela, and Russia can no longer subsist on oil exports to the same degree.  This brings me to the second declaration of this article :

From the start of 2011, measure the dollar-years of area enclosed by a chart of the price of oil above $70.  There are only 200 such dollar-years remaining for the current world petro-order.  We can call this the 'Law of Finite Petrotyranny'. 

Allow me to elaborate. 

Through some proprietary analysis, I have calculated the remaining lifetime of oil's economic importance as follows :

  • From the start of 2011, take the average price of West Texas Intermediate (WTI), Brent, or NYMEX oil, and subtract $70 from that, each year. 
  • Take the number accumulated, and designate that as 'X' dollar-years.
  • As soon as X equals 200 dollar-years, oil will not just fall below $70, but will never again be a large enough portion of world GDP to have a significant macroeconomic impact. 
     

You can plug in your own numbers to estimate the year in which oil will cease to exert such power.  For example, if you believe that oil will average $120, which is $50 above the $70 floor, then the X points are expended at a rate of $50/year, meaning depletion at the end of 2014.  If oil instead averages just $100, the X points are expended at $30/year, taking 6.67 years, or until late 2017, to consume.  Points are only depleted when oil is above $70, but are not restored if oil falls below $70 (research projects may be discontinued or postponed, but work already done is not erased).  For those who (wrongly) insist that oil will soon be $170, the good news is that they would then see the X points depleted in just two short years.  The chart shows three scenarios, with oil averaging $120, $110, and $100, and indicates the year in which each price trend would exhaust the 200 X points (points A, B, and C; the 200 X points correspond to the area of each of the three rectangles).  In reality, price fluctuations will cause variations in the rate of X point depletion, but you get the idea. 
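The accounting above is simple enough to sketch directly.  This is a minimal illustration of the rule as stated, assuming a sequence of yearly average prices; the $70 floor and 200 dollar-year budget are the article's own:

```python
FLOOR = 70.0    # oil price floor, per the declaration
BUDGET = 200.0  # dollar-years remaining from the start of 2011

def exhaustion_point(yearly_avg_prices, start_year=2011):
    """Return the (possibly fractional) point in time at which the
    200 dollar-years are used up, or None if the given prices never
    exhaust them.  A return of 2015.0 means the very end of 2014."""
    x = 0.0
    for i, price in enumerate(yearly_avg_prices):
        burn = max(0.0, price - FLOOR)       # points deplete only above $70
        if burn > 0 and x + burn >= BUDGET:
            return start_year + i + (BUDGET - x) / burn
        x += burn                            # never restored below $70
    return None

print(exhaustion_point([120] * 10))  # 2015.0 -> depleted at the end of 2014
print(exhaustion_point([100] * 10))  # ~2017.67 -> late 2017
print(exhaustion_point([60] * 10))   # None: $60 oil never depletes X
```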

Keep in mind the Law of Finite Petrotyranny, and on that basis, welcome any increase in oil prices as the hastening force of oil replacement that it is.  My personal opinion?  We average about $100/barrel, causing depletion of the X points in 2017 (scenario 'C' in green). 

Conclusion :

So what happens after the Law of Finite Petrotyranny manifests itself?  Let me pre-empt the strawmen that critics will erect, and state that oil will still be an important source of energy.  But most people will no longer care about the price of oil, much as the average person does not keep track of the price of natural gas or coal.  Oil will simply be a fuel no longer important enough to cause recessions or greatly alter consumer behavior through short-term spikes.  Many OPEC countries will see a great reduction in their power, and will no longer be able to placate their citizens through petro-handouts alone.  These countries would do well to act now and diversify their economies, phase in civil liberties while they can still do so incrementally, and prepare for a future of much lower leverage over their current customers.

So cheer oil prices higher so that the X points get frittered away quickly.  It will be fun. 

 

Related :

A Future Timeline for Energy

A Future Timeline for Automobiles 

July 01, 2011 in Accelerating Change, Core Articles, Economics, Energy, Technology | Permalink | Comments (76) | TrackBack (0)


Carbonara

Observers have been waiting for carbon nanotubes, buckyballs, and graphene to transform the world for quite some time, and the wait has been longer than expected.  Enthusiasm for these new miracle materials has all but vanished.  Is this warranted?  Where does the state of innovation in the various forms of carbon, which could yield ultra-strong, ultra-light materials and superfast computing, really stand? 

CNET had an article just last month about the multiple disruptions that the various allotropes of carbon are about to make.  That is quite exciting, except that CNET also had a similar article in 2003.  Similarly, Ray Kurzweil extolled carbon nanotubes as a successor to silicon quite heavily in 1999, but not as much now, even though that supposed transition would be much closer to the present.  This does not mean that Kurzweil's estimation was in error, but rather that the technology was unexpectedly stagnant during the early 2000s.  So let us examine why there was such an interruption, and whether progress has since resumed.    

I wrote in 2009 about how we had undergone a multi-year nanotech winter, and how we were emerging from it in 2009.  As anticipated, carbon nanotubes are now finally falling in price, and being produced at a scale that could start making an impact.  Sure enough, activity began to stir right as I predicted, and the 2010 Nobel Prize in Physics was awarded for research in graphene.  Just like CNET's article, Wired also has an article about the diverse applications that graphene could revolutionize.  Combining the two articles, we can summarize the core possibilities of carbon allotropes as follows :

Ultra-dense computing and storage : Graphene transistors smaller than 1 nanometer have been demonstrated.  Carbon allotropes could keep the exponential doubling of both computing and storage capacity going well into the 2030s. 

Carbon Fiber Vehicles : This lightweight, ultrastrong material can save vast amounts of fuel by reducing the weight of cars and airplanes.  While premium products such as the $6000 Trek Madone bicycles are already made from carbon fiber, greater volume is reducing prices and will soon make the average car much lighter than it is today, increasing fuel efficiency and reducing traffic fatalities. 

Energy Storage : Natural Gas is not only much cheaper than oil per unit of energy (oil would have to drop to about $30 to match current NG prices), but the supply of NG is more evenly distributed across the world than the oil supply.  The US alone has an enormous reserve of natural gas that could ensure total energy independence.  The main problem with NG is storage, which is the primary reason oil displacement is not happening rapidly.  But microporous carbon can effectively act as a sponge for natural gas, enabling safe and easy transport.  This could potentially change the entire energy map.

There are other applications beyond these core three, but suffice it to say, the allotropes of carbon can perform a greater variety of functions than any other material available to us today.  Watch for carbon allotropes popping up in the strangest of places, and know that each new use drives the cost ever lower. 

Related :

Nanotechnology : Bubble, Bust,.....Boom?

Milli, Micro, Nano, Pico

November 01, 2010 in Accelerating Change, Nanotechnology, Science, Technology | Permalink | Comments (3)


The TechnoSponge

After years of thinking about this, I have come up with a term to describe the new, 'good' type of deflation that is evading the notice of almost all of the top economists in the world today.  It changes many of the most fundamental assumptions of economics, even as most economic thought remains far behind the curve. 

First, let us review some events of the last two years.  To stave off the prospect of a deflationary spiral that could lead to a depression, the major governments of the world followed 20th-century textbook economics, and injected colossal amounts of liquidity into the financial system.  In the US, not only was the Fed Funds rate lowered to nearly zero (for 18 months now and counting), but an additional $1 Trillion was injected as well. 

However, now that a depression has been averted, and the recession has ended, we were supposed to experience inflation even amidst high unemployment, just like we did in the 1970s, to minimize debt burdens.  But alas, there is still no inflation, despite a yield curve with more than 3% steepness, and a near-0% FF rate for so long.  How could this be?  What is absorbing all the liquidity?   

In The Impact of Computing, I discussed how 1.5% of World GDP today comprises products whose functionality can be purchased for a price that halves every 18 months.  'Moore's Law' applies to semiconductors, but storage, software, and some biotech are on a similar exponential curve.  This force makes productivity gains higher, and inflation lower, than traditional 20th-century economics would anticipate.  Furthermore, the second derivative is also positive - the rate of productivity gains is itself accelerating.  1.5% of World GDP may be small, but what about when this percentage grows to 3% of World GDP?  5%?  We may be only a decade away from this, and the impact of this technological deflation will then be far more obvious. 
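The halving curve described above can be written out directly.  A minimal sketch, assuming the 18-month halving period cited in the article:

```python
def price_after(initial_price, months, halving_months=18):
    """Price of the same fixed functionality `months` months later,
    assuming the price halves every `halving_months` months."""
    return initial_price * 0.5 ** (months / halving_months)

print(price_after(1000, 36))  # 250.0: two halvings in three years
print(price_after(1000, 54))  # 125.0: three halvings in 4.5 years
```

Run in reverse, the same curve explains why any inflationary uptick against fixed nominal price points ($99.99, $199.99) hands technology companies expanding margins, as described below.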

Most high-tech companies have a business model that incorporates a sort of 'bizarro force' that is completely the opposite of what old-economy companies operate under : The price of the products sold by a high-tech company decreases over time.  Any other company will manage inventory, pricing, and forecasts under an assumption of inflationary price increases, but a technology company exists under the reality that all inventory depreciates very quickly (at over 10% per quarter in many cases), and that price drops will shrink revenues unless unit sales rise enough to offset it (and assuming that enough unit inventory was even produced).  This results in the constant pressure to create new and improved products every few months just to occupy prime price points, without which revenues would plunge within just a year.  Yet, high-tech companies have built hugely profitable businesses around these peculiar challenges, and at least 8 such US companies have market capitalizations over $100 Billion.  6 of those 8 are headquartered in Silicon Valley. 

Now, here is the point to ponder : we have never had a significant technology sector while also facing fears (warranted or otherwise) of high inflation.  When high inflation vanished in 1982, the technology sector was too tiny to be a significant contributor to macroeconomic statistics.  In an environment of high inflation combined with a large technology industry, however, major consumer retail price points, such as $99.99 or $199.99, become more affordable.  The same applies to enterprise-class customers.  Thus, demand creeps upward even as the cost of producing the products falls along the same Impact of Computing curve.  This gives a technology company the ability to postpone price drops and expand margins, or to sell more volume at the same nominal dollar price.  Hence, higher inflation causes the revenues and/or margins of technology companies to rise, which means their earnings per share surge.

So what we are seeing is that the gigantic amount of liquidity created by the Federal Reserve is instead cycling through technology companies and increasing their earnings.  The products they sell, in turn, increase productivity and promptly push inflation back down.  Every uptick in inflation merely guarantees its own pushback, and the 1.5% of GDP that mops up all the liquidity and creates this form of 'good' deflation can be termed the 'Technosponge'.  So how much liquidity can the Technosponge absorb before saturation? 

At this point, if the US prints another $1 Trillion, that will still merely halt deflation, and there will be no hint of inflation at all.  It would take a full $2 Trillion to saturate the Technosponge, and temporarily push consumer inflation even to the less-than-terrifying level of 4%, while also generating substantial jumps in productivity and tech company earnings.  In fact, the demographics of the US, with baby boomers reaching their geriatric years, are highly deflationary (and this is the bad type of deflation), so the US would have to print another $1 Trillion every year for the next 10 years just to offset demographic deflation and keep the Technosponge saturated. 

A Technosponge that is 1.5% of GDP might be keeping CPI inflation under 2%, but when the Technosponge is 3% of GDP, even trillions of dollars of liquidity won't halt deflation.  Deflation may become normal, even as living standards and productivity rise at ever-increasing rates.  The people who will suffer are holders of debt, particularly mortgage debt.  Inflating away debt will no longer be a tool available to rescue people (and governments) from their errors.  The biggest beneficiaries will be technology companies, and those who are tied to them. 

But to keep prosperity rising, productivity has to rise at the maximum possible rate.  This requires the Technosponge to be kept full at all times - the 'new normal'.  Thus, the printing press has to start on the first $1 Trillion now, and printing has to continue until we see inflation.  Economists will be surprised at how much can be printed without seeing any inflation, and will not be able to draw the connection about why the printed money is boosting productivity. 

Related :

The Impact of Computing

Timing the Singularity

 

 

July 01, 2010 in Accelerating Change, Computing, Economics, Technology, The Singularity | Permalink | Comments (104)


The Winds of War, The Sands of Time, v2.0

This is version 2.0 of a legendary article written here back on March 19, 2006, noticed and linked by Hugh Hewitt, which put The Futurist on the blogosphere map for the first time.  Less than four years have elapsed since the original publication, but the landscape of global warfare has changed substantially over this time, warranting an update to the article. 

In the mere 44 months since the original article was written, what seemed impossible has become a reality.  The US now has the upper hand against terrorist groups like Al-Qaeda, despite the seemingly impossible task of fighting suicidal terrorists.  As regular readers of The Futurist are aware, I issued a prediction in May of 2006, during the darkest days of the Iraq War, that not only would the US win, but that the year of victory would be precisely 2008.  As events unfolded, that prediction turned out to be correct.  To readers who continue to ask how I was able to make such a prediction against seemingly impossible odds, I claim that it is not very difficult once you understand the necessary conditions of war and peace within the human mind. 

Given the massive media coverage of the minutiae of the Iraq War, and the fashionable fad of opposing it, one could be led to think that this is one of the largest wars ever fought.  Therein lies the proof that we are actually living in the most peaceful time in human history. 

Just a few decades ago, wars and genocides killing upwards of a million people were commonplace, with more than one often underway at once.  Remember these?

Second Congo War (1998-2002) : 3.6 million deaths

Iran-Iraq War (1980-88) : 1.5 million deaths

Soviet Invasion of Afghanistan (1979-89) : 1 million deaths

Khmer Rouge (1975-79) : 1.7 million deaths from genocide

Bangladesh Liberation War (1971) : 1.5 million deaths from genocide

Vietnam War (1957-75) : 2.4 million deaths

Korean War (1950-53) : 3 million deaths

This list is by no means complete, as wars killing fewer than one million people are not even listed.  At least 30 other wars killed over 20,000 people each, between 1945 and 1989.

If we go further back to the period from 1900-1945, we can see that multiple wars were being simultaneously fought across the world.  Going further back still, the 19th century had virtually no period without at least two major wars being fought.

We can thus conclude that by historical standards, the current Iraq War is tiny, and can barely be found on the list of historical death tolls.  That it has received so much attention merely indicates how little warfare is going on in the world, and how ignorant of historical realities most people are. 

Why have so many countries quietly adapted to peaceful coexistence?  Why is a war between Britain and France, or Russia and Germany, or the US and Japan, nearly impossible today?  Why are we not seeing a year like 1979, when the entire continent of Asia threatened to fly apart due to three major events happening at once (the Iranian Revolution, the Soviet invasion of Afghanistan, and the Chinese invasion of Vietnam)? 

We can start with the observation that never have two democratic countries, each with a per-capita GDP greater than $10,000/year on a PPP basis, gone to war with each other.  The decline in warfare in Europe and Asia correlates closely with multiple countries meeting these two conditions over the last few decades, and this can continue as more countries graduate to this standard of freedom and wealth.  The chain of logic is as follows :

1) Nations with elected governments and free-market systems tend to be the overwhelming majority of countries that achieve per-capita incomes greater than $10,000/year.  Only a few petro-tyrannies are the exception to this rule. 

2) A nation with high per-capita income tends to conduct extensive trade with other nations of high prosperity, resulting in the ever-deepening integration of these economies with each other.  A war would disrupt the economies of both participants as well as those of neutral trading partners.   Since the citizens of these nations would suffer financially from such a war, it is not considered by elected officials. 

3) As more of the world's people gain a vested interest in the stability and health of the interlocking global economic system, fewer and fewer countries will consider international warfare as anything other than a lose-lose proposition.

4) More nations can experience their citizenry moving up Maslow's Hierarchy of Needs, allowing knowledge-based industries to thrive, and thus making international trade continuously easier and more extensive. 

5) Since economic growth is continuously accelerating, many countries have crossed the $10,000/yr barrier in just the last 20 years, and so the reduction in warfare after 1991 has been drastic even though there was little apparent reduction over the 1900-1991 period. 

This explains the dramatic decline in war deaths across Europe, East Asia, and Latin America over the last few decades.  Thomas Friedman has a similar theory, called the Dell Theory of Conflict Prevention, wherein no two countries linked by a major supply chain/trade network (such as that of a major corporation like Dell Computer), have ever gone to war with each other, as the cost of losing the presence of major industries through war is prohibitive to both parties.  If this is the case, then the combinations of countries that could go to war with each other continues to drop quickly. 

To predict the future risk of major wars, we can begin by assessing the state of some of the largest and/or riskiest countries in the world.  Success at achieving democracy and a per-capita GDP greater than $10,000/yr is highlighted in green.  We can also throw in the UN Human Development Index, which is a composite of these two factors, and track the rate of progress of the HDI over the last 30 years.  In general, countries with scores greater than 0.850, consistent with near-universal access to consumer-class amenities, have met the aforementioned requirements of prosperity and democracy.  There are many more countries with a score greater than 0.850 today than there were in 1975.

Let's see how some select countries stack up.


China : The per-capita income is rapidly closing in on the $10,000/yr threshold, but democracy is a distant dream.  I have stated that China will see a sharp economic slowdown in the next 10 years unless they permit more personal freedoms, and thus nurture entrepreneurship.  Technological forces will continue to pressure the Chinese Communist Party, and if this transition is moderately painless, the ripple effects will be seen in most of the other communist or autocratic states that China supports, and will move the world strongly towards greater peace and freedom.  The single biggest question for the world is whether China's transition happens without major shocks or bloodshed.  I am optimistic, as I believe the CCP is more interested in economic gain than in clinging to an ideology and one-party rule, which is a sharp contrast to the Mao era, when 40 million people died over ideology-driven economic schemes.  Cautiously optimistic. 

India : A secular democracy has existed for a long time, but economic growth lagged far behind.  Now, India is catching up, and will soon be a bulwark for democracy and stability for the whole world.  Some of the most troubled countries in the world, from Burma to Afghanistan, border India and could transition to stability and freedom under India's sphere of influence.  India is only now realizing how much the world will depend on it.  Optimistic.

Russia : A lack of progress in the HDI is a total failure, enabling many countries to overtake Russia over the last 15 years.  Putin's return to dictatorial rule is a further regression in Russia's progress.  Hopefully, energy and technology industries can help Russia increase its population growth rate, and up its HDI.  Cautiously optimistic.

Indonesia : With more Muslims than the entire Middle East put together, Indonesia took a large step towards democracy in 1999 (improving its HDI score), and is doing moderately well economically.  Economic growth needs to accelerate in order to cross $10,000/yr per capita by 2020.  Cautiously optimistic.

Pakistan : My detailed Pakistan analysis is here.  The divergence between the paths of India and Pakistan has been recognized by the US, and Pakistan, with over 50 nuclear warheads, is also where Osama bin Laden and thousands of other terrorists are currently hiding.  Any 'day of infamy' that the US encounters will inevitably be traced to individuals operating in Pakistan, which has regressed from democracy to dictatorship, and is teetering on the edge of religious fundamentalism.  The economy is growing quickly, however, and this is the only hope of averting a disaster.  Pakistan will continue to struggle between emulating the economic progress of India against descending into the dysfunction of Afghanistan.  Pessimistic.

Iraq : Although Iraq is not a large country, its importance to the world is disproportionately significant.  Bordering so many other non-democratic nations, our hard-fought victory in Iraq now places great pressure on all remaining Arab states.  The destiny of the US is also intertwined with Iraq, as the outcome of the current War in Iraq will determine the ability of America to take any other action, against any other nation, in the future.  Optimistic.

Iran : Many would be surprised to learn that Iran is actually not all that poor, and the Iranian people have enough to lose that they are not keen on a large war against a US military that could dispose of Iran's military just as quickly as they did Saddam's.  However, the autocratic regime that keeps the Iranian people suppressed has brutally quashed democratic movements, most recently in the summer of 2009.  The secret to turning Iran into a democracy is its neighbor, Iraq.  If Iraq can succeed, the pressure on Iran exerted by Internet access and globalization next door will be immense.  This will continue to nibble at the edges of Iranian society, and the regime will collapse before 2015 even without a US invasion.  If Iran's leadership insists on a confrontation over their nuclear program, the regime will collapse even sooner.  Cautiously optimistic. 

So Iraq really is a keystone state, and the struggle to prevail over the forces that would derail democracy has major repercussions for many nations.  The US, and the world, could not have afforded for the US mission in Iraq to fail.  But after the success in Iraq, all remaining roads to disastrous tragedy lead to Pakistan.  The country in which the leadership of Al-Qaeda resides is the same country whose most prominent nuclear scientist was caught selling nuclear secrets on the black market.  This is simply the most frightening combination of circumstances in the world today, far more troubling than anything directly attributable to Iran or North Korea. 

But smaller-scale terrorism is nothing new.  It just was not taken as seriously back when nations were fighting each other in much larger conflicts.  The 1983 Beirut bombing that killed 241 Americans did not dominate the news for more than two weeks, as it occurred during the far more serious Cold War.  Today, the absence of wars between nations brings terrorism into the spotlight that it could not have previously secured. 

Wars against terrorism have been a paradigm shift, because where a war like World War II involved symmetrical warfare between declared armies, the War on Terror involves asymmetrical warfare in both directions.  Neither party has yet gained a full understanding of the power it has over the other. 

A few terrorists with a small budget can kill thousands of innocents without confronting a military force.  Guerrilla warfare can tie down the mighty US military for years until the public grows weary of the stalemate, even while the US cannot permit itself to use more than a tiny fraction of its power in retaliation.  Developed nations spend vastly more money on political and media activities centered around the mere discussion of terrorism than the terrorists themselves need to finance a major attack on these nations. 

At the same time, pervasively spreading Internet access, satellite television, and consumer brands continue to disseminate globalization and lure the attention of young people in terrorist states.  We saw exactly this in Iran in the summer of 2009, where state-backed murders of civilian protesters were videotaped by cameraphone, and immediately posted online for the world to see.  This unrelentingly and irreversibly erodes the fabric of pre-modern fanaticism at almost no cost to the US and other free nations.  The efforts by fascist regimes to obstruct the mists of the information ethersphere from entering their societies are so futile as to be comical, and the Iranian regime may not survive the next uprising, when even more Iranians will have camera phones handy.  Bidirectional asymmetry is the new nature of war, and the side that learns how to harness the asymmetrical advantage it has over the other is the side that will win.

It is the wage of prosperous, happy societies to be envied, hated, and forced to withstand threats that they cannot reciprocate back onto the enemy.  The US has overcome foes as formidable as the Axis Powers and the Soviet Union, yet we managed to adapt and gain the upper hand against a pre-modern, unprofessional band of deviants that does not even have the resources of a small nation and has not invented a single technology.  The War on Terror was thus ultimately not with the terrorists, but with ourselves - our complacency, short attention spans, and propensity for fashionable ignorance over the lessons of history. 

But 44 months turned out to be a very long time, during which we went from a highly uncertain position in the War on Terror to one of distinct advantage.  Whether we continue to maintain the upper hand that we currently have, or become too complacent and let the terrorists kill a million of us in a day remains to be seen. 

November 21, 2009 in Accelerating Change, Core Articles, Economics, Political Debate, Politics | Permalink | Comments (73) | TrackBack (0)


Timing the Singularity

(See the 10-yr update here).  The Singularity.  The event when the rate of technological change becomes human-surpassing, just as the advent of human civilization a few millennia ago surpassed the comprehension of non-human creatures.  So when will this event happen?

There is a great deal of speculation on the 'what' of the Singularity, whether it will create a utopia for humans, cause the extinction of humans, or some outcome in between.  Versions of optimism (Star Trek) and pessimism (The Matrix, Terminator) all become fashionable at some point.  No one can predict this reliably, because the very definition of the singularity itself precludes such prediction.  Given the accelerating nature of technological change, it is just as hard to predict the world of 2050 from 2009, as it would have been to predict 2009 from, say, 1200 AD.  So our topic today is not going to be about the 'what', but rather the 'when' of the Singularity. 

Let us take a few independent methods to arrive at estimations on the timing of the Singularity.

1) Ray Kurzweil has constructed this logarithmic chart that combines 15 unrelated lists of key historic events since the Big Bang 15 billion years ago.  The exact selection of events is less important than the undeniable fact that the intervals between such independently selected events are shrinking exponentially.  This, of course, means that the next several major events will occur within single human lifetimes. 


Kurzweil wrote with great confidence, in 2005, that the Singularity would arrive in 2045.  One thing I find about Kurzweil is that he usually predicts the nature of an event very accurately, but overestimates the rate of progress by 50%.  Part of this is because he insists that computer power per dollar doubles every year, when it actually doubles every 18 months, which results in every other date he predicts being distorted as a downstream byproduct of this figure.  Another part of this is that Kurzweil, born in 1948, is famously taking extreme measures to extend his lifespan, and may well have an expectation of living until 100 but not necessarily beyond that.  A Singularity in 2045 would be before his century mark, but herein lies a lesson for us all.  Those who have a positive expectation of what the Singularity will bring tend to have a subconscious bias towards estimating it to happen within their expected lifetimes.  We have to be watchful enough to not let this bias influence us.  So when Kurzweil says that the Singularity will be 40 years from 2005, we can apply the discount to estimate that it will be 60 years from 2005, or in 2065. 
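The discount is simple arithmetic, and can be sketched as follows (the 12-month and 18-month doubling times are the figures cited in the paragraph above):

```python
# Sketch of the 50% discount argument. The 12- and 18-month doubling
# times are the figures cited in the paragraph.
kurzweil_doubling_months = 12   # his assumption for computer power per dollar
observed_doubling_months = 18   # the doubling time actually observed
stretch = observed_doubling_months / kurzweil_doubling_months  # 1.5

kurzweil_forecast_years = 40    # Singularity 40 years from 2005, i.e. 2045
adjusted_years = kurzweil_forecast_years * stretch

print(2005 + int(adjusted_years))  # 2065
```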

2) John Smart is a brilliant futurist with a distinctly different view on accelerating change from Ray Kurzweil, but he has produced very little visible new content in the last 5 years.  In 2003, he predicted the Singularity for 2060, +/- 20 years.  Others like Hans Moravec and Vernor Vinge have also placed their predictions in the mid-to-late 21st century. 

3) Ever since the start of the fictional Star Trek franchise in 1966, they have made a number of predictions about the decades since, with impressive accuracy.  In Star Trek canon, humanity experiences a major acceleration of progress starting from 2063, upon first contact with an extraterrestrial civilization.  While my views on first contact are somewhat different from the Star Trek prediction, it is interesting to note that their version of a 'Singularity' happened to occur in 2063 (as per the 1996 film Star Trek : First Contact). 

4) Now for my own methodology.  We shall first take a look at a novel from 1863 by Jules Verne, titled "Paris in the 20th Century".  Set about a century in the future from Verne's perspective, the novel predicts innovations such as air conditioning, automobiles, helicopters, fax machines, and skyscrapers in detail.  Such accuracy makes Jules Verne the greatest futurist of the 19th century, but notice how his predictions involve innovations that occurred within 120 years of writing.  Verne did not predict exponential growth in computation, genomics, artificial intelligence, cellular phones, and other innovations that emerged more than 120 years after 1863.  Thus, Jules Verne was up against a 'prediction wall' of 120 years, which was much longer than a human lifespan in the 19th century. 

But now, the wall is closer.  In the 3.5 years since the inception of The Futurist, I have consistently noticed a 'prediction wall' on all long-term forecasts that makes it very difficult to make specific predictions beyond 2040 or so.  In contrast, it was not very hard to predict the state of technology in 1930 from the year 1900, just 30 years prior.  Despite all the inventions between 1900 and 1930, the diffusion rate was very slow, and it took well over 30 years for many innovations to affect the majority of the population.  The diffusion rate of innovation is much faster today, and the pervasive Impact of Computing is impossible to ignore.  This 'event horizon' that we now see does not mean the Singularity will be as soon as 2040, as the final couple of decades before the Singularity may still be too fast to make predictions about until we get much closer.  But the compression of such a wall/horizon from 120 years in Jules Verne's time to 30 years today gives us some idea of the second derivative in the rate of change, and many other top futurists have observed the same approaching phenomenon.  By 2030, the prediction wall may thus be only 15 years away.  By the time of the Singularity, the wall would be almost immediately ahead from a human perspective. 

So we can return to the Impact of Computing as a driver of the 21st century economy.  In the article, I wrote about how roughly $700 billion per year as of 2008, or 1.5% of World GDP, comprises products that improve at an average of 59% a year per dollar spent.  Moore's Law is a subset of this, but this cost deflation applies to storage, software, biotechnology, and a few other industries as well. 

If products tied to the Impact of Computing are 1.5% of the global economy today, what happens when they are 3%? 5%?  Perhaps we would reach a Singularity when such products are 50% of the global economy, because from that point forward, the other 50% would very quickly diminish into a tiny percentage of the economy, particularly if that 50% was occupied by human-surpassing artificial intelligence.   

We can thus calculate a range of dates by when products tied to the Impact of Computing become more than half of the world economy.  In the table, the columns signify whether one assumes that 1%, 1.5%, or 2% of the world economy is currently tied, and the rows signify the rate at which this percentage share of the economy is increasing, whether 6%, 7%, or 8%.  This range is derived from the fact that the semiconductor industry has a 12-14% nominal growth trend, while nominal world GDP grows at 6-7% (some of which is inflation).  Another way of reading the table is that if you consider the Impact of Computing to affect 1% of World GDP, but that share grows by 8% a year, then that 1% will cross the 50% threshold in 2059.  Note how a substantial downward revision in the assumptions moves the date outward only by years, rather than decades or centuries. 
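The table's arithmetic can be sketched in a few lines; this is a minimal model under the stated assumptions (a starting share of World GDP in 2008, compounding annually at the excess of computing's growth over World GDP growth):

```python
# Minimal sketch of the crossing-date table: compound the share of World GDP
# tied to the Impact of Computing until it exceeds 50%.
def crossing_year(initial_share, annual_growth, start_year=2008, threshold=0.5):
    """Year in which the compounding share first exceeds the threshold."""
    share, year = initial_share, start_year
    while share < threshold:
        share *= 1 + annual_growth
        year += 1
    return year

# A 1% share growing 8% a year crosses the 50% threshold in 2059,
# matching the table reading described in the text.
print(crossing_year(0.01, 0.08))  # 2059
```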

We see these parameters deliver a series of years, with the median values arriving at around the same dates as aforementioned estimates.  Taking all of these points in combination, we can predict the timing of the Singularity.  I hereby predict that the Technological Singularity will occur in :

 

2060-65 ± 10 years

 

Hence, the earliest that it can occur is 2050 (thus the URL of this site), and the latest is 2075, with the highest probability of occurrence in 2060-65.  There is virtually no statistical probability that it can occur outside of the 2050-75 range. 

So now we know the 'when' of the Singularity.  We just don't know the 'what', nor can we with any certainty. 

Related :

The Impact of Computing

Are You Acceleration Aware?

Pre-Singularity Abundance Milestones

SETI and the Singularity

August 20, 2009 in Accelerating Change, Core Articles, The Singularity | Permalink | Comments (69)


Video Conferencing : A Cascade of Disruptions

Almost 3 years ago, in October of 2006, I first wrote about Cisco's Telepresence technology which had just launched at that time, and how video conferencing that was virtually indistinguishable from reality was eventually going to sharply increase the productivity and living standards of corporate employees (image : Cisco). 

At that time, Cisco and Hewlett Packard both launched full-room systems that cost over $300,000 per room.  Since then, there has not been any price drop from either company, which is unheard of for a system with components subject to Moore's Law rates of price declines.  This indicates that market demand has been high enough for both Cisco and HP to sustain pricing power and improve margins.  Smaller companies like LifeSize, Polycom, and Teleris have lower-end solutions for as little as $10,000, that have also been selling briskly, but have not yet dragged down the Cisco/HP price tier.

This article in the San Jose Mercury News indicates what sort of savings these two corporations have earned by use of their own systems :

In a trend that could transform the way companies do business, Cisco Systems has slashed its annual travel budget by two-thirds — from $750 million to $240 million — by using similar conferencing technology to replace air travel and hotel bills for its vast workforce.

Likewise, Hewlett-Packard says it sliced 30 percent of its travel expenses from 2007 to 2008 — and expects even better results for 2009 — in large part because of its video conference technology.

If Cisco can chop its travel expenses by two-thirds, and save $500 million per year (which increases their annual profit by a not-insignificant 6-10%), then every other large corporation can save a similar magnitude of money.  For corporations with very narrow operating margins, the savings could have a dramatic impact on operating earnings, and therefore stock price.  The Fortune 500 alone (excluding airline and hotel companies) could collectively save $100 billion per year, in a wave set to begin immediately if either Cisco or HP drops the price of their solution, which may happen in a matter of months.  We will soon see that for every $20 that corporations used to spend on air travel and hotels, they will instead be spending only $1 on videoconferencing expenses.  This is a gigantic gain in enterprise productivity. 
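A quick check of the arithmetic, using the figures from the cited article:

```python
# Cisco's annual travel budget, from the San Jose Mercury News figures quoted above.
before, after = 750e6, 240e6      # USD per year
savings = before - after          # ~$510M per year, close to "two-thirds"

print(round(savings / before, 2))              # fraction of budget cut: 0.68
print(f"${savings / 1e6:.0f}M saved per year")
```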

Needless to say, high-margin airline revenue from flights between major business centers (such as San Francisco-Taipei or New York-London) will be slashed, and airlines will have to consolidate to fewer flights, making flight schedules even less flexible for business travelers and losing even more passengers.  Hotels will have to consolidate, and taxis and restaurants in business hubs will suffer as well.  But these are merely the most obvious of disruptions.  What is even more interesting are the less obvious ripple effects that only manifest a few years later, which are :

1) Employee Time and Hassle : Anyone who has had to travel to another continent for a Mon-Fri workweek trip knows that the process of taking a taxi to the airport, waiting 2 hours at the airport, the flight itself, and the ride to the final destination consumes most of the weekends on either side of the trip.  Many senior executives log over 200,000 miles of flight per year.  This is a huge drag on personal time and quality of life.  Travel on weekdays consumes productive time that the employer could benefit from, which for senior executives, could be worth thousands of dollars per hour.  Furthermore, in an era of superviruses, we have already seen SARS, bird flu, and swine flu as global pandemic threats within the last few years.  A reduction of business travel will slow down the rate at which such viruses can spread across the globe and make quarantines less inconvenient for business (although tourist travel and remaining business travel are still carriers of this). 

2) Real Estate Prices in Expensive Areas : Home prices in Manhattan and Silicon Valley are presently 4X or more higher than a home of the same square footage 80 miles away.  By 2015, the single-screen solution that Cisco sells for $80,000 today may cost as little as $2000, and those from LifeSize and others may be even cheaper, so hosting meetings with colleagues from a home office might be as easy as running a conference call.  A good portion of employees who have small children may find it possible to do their jobs in a manner that requires them to go to their corporate office only once or twice a week.  If even 20% of employees choose to flee the high-cost housing near their offices, the real estate prices in Manhattan and Silicon Valley will deflate significantly.  While this is bad news for owners of real-estate in such areas, it is excellent news for new entrants, who will see an increase in their purchasing power.  Best of all, working families may be able to afford to have children that they presently cannot finance. 

3) Passenger Aviation Technological Leap : Airlines and aircraft manufacturers have little recourse but to respond to these disruptions with innovations of their own, of which the only compelling possibility is to have each journey take far less time.  It is apparent that there has been little improvement in the speed of passenger aircraft in the last 40 years.  J. Storrs Hall at the Foresight Institute has an article up with a chart that shows the improvements and total flattening of the speed of passenger airline travel.  The costs of staying below Mach 1 vs. flying above it are very different, by as much as 3X, which accounts for the sudden halt in speed gains just below the speed of sound after the early 1960s.  However, the technologies of supersonic aircraft (which exist, of course, in military planes) are dropping in price, and it is possible that suborbital passenger flight could be available for the cost of a first-class ticket by 2025.  The Ansari X Prize contest and SpaceShipTwo have already demonstrated early incarnations of what could scale up to larger planes.  This will not reverse the video-conferencing trend, of course, but it will make the airlines more competitive for those interactions that have to be in person. 

So we are about to see a cascade of disruptions pulsate through the global economy.  While in 2009, you may have no choice but to take a 14-hour flight (each way) to Asia, in 2025, the similar situation may present you with a choice between handling the meeting with the videoconferencing system in your home office vs. taking a 2-hour suborbital flight to Asia. 

This, my friends, is progress. 

August 11, 2009 in Accelerating Change, Computing, Economics, Technology | Permalink | Comments (25) | TrackBack (0)


The Next Big Thing in Entertainment, A Half-Time Update

On April 1, 2006, I wrote a detailed article on the revolutionary changes that were to occur in the concept of home entertainment by 2012 (see Part I and Part II of the article).  Now, in 2009, half of the time within the six-year span between the original article and the prediction has elapsed.  Of course, given the exponential nature of progress, much more happens within the second half of any prediction horizon relative to the first half. 

The prediction issued in 2006 was:

Video Gaming (which will no longer be called this) will become a form of entertainment so widely and deeply partaken in that it will reduce the time spent on watching network television to half of what it is (in 2006), by 2012.

The basis of the prediction was detailed in various points from the original article, which in combination would lead to the outcome of the prediction.  The progress as of 2009 around these points is as follows :

1) Video game graphics continue to improve : Note the progress of graphics at 10-year intervals starting from 1976.  Projecting the same trend, 2012 will feature many games with graphics that rival that of CGI films, which itself can be charted by comparing Pixar's 'Toy Story' from 1995 to 'Up' from 2009.  See this demonstration from the 2009 game 'Heavy Rain', which arguably exceeds the graphical quality of many CGI films from the 1990s.   

The number of polygons rendered per square inch of screen is closely tied to The Impact of Computing, and can only rise steadily.  The 'uncanny valley' is a hurdle that designers and animators will take a couple of years to overcome, but overcoming this barrier is inevitable as well. 

2) Flat-screen HDTVs reach commodity prices : This has already happened, and prices will continue to drop so that by 2012, 50-inch sets with high resolution will be under $1000.  A thin television is important, as it clears the room to allow more space for the movement of the player.  A large size and high resolution are equally important, in order to create an immersive visual experience. 

We are rapidly trending towards LED and Organic LED (OLED) technologies that will enable TVs to be less than one centimeter thick, with ultra-high resolution. 

3) Speech and motion recognition as control technologies : When the original article was written on April 1, 2006, the Nintendo Wii was not yet available in the market.  But as of June 2009, 50 million units of the Wii have sold, and many of these customers did not own any game console prior to the Wii. 

The traditional handheld controllers are very limited in this regard, despite being used by hundreds of millions of users for three decades.  If the interaction that a user can have with a game is more natural, the game becomes more immersive to the human senses.  See this demonstration from Microsoft for their 'Project Natal' interface technology, due for release in 2010. 

Furthermore, haptic technologies have made great strides, as seen in the demonstration videos over here.  Needless to say, the possibilities are vast. 

4) More people are migrating away from television, and towards games :  Television viewership is plummeting, particularly among the under-50 audience, as projected in the original 2006 article.  Fewer and fewer television programs of any quality are being produced, as creative talent continues to leak out of television network studios.  At the same time, World of Warcraft has 11 million subscribers, and as previously mentioned, the Wii has 50 million units in circulation. 

There are only so many hours of leisure available in a day, and Internet surfing, movies, and video games are all more compelling than the ever-declining quality of television offerings.  Children have already moved away from television, and the trend will creep up the age scale.

5) Some people can earn money through games : There are an increasing number of ways in which avid players can earn real money from activities within a game.  From trading of items to selling of characters, this market is estimated at over $1 billion in 2008, and is growing.  Highly skilled players already earn thousands of dollars per year this way, and with more participants joining through more advanced VR experiences described above, this will attract a group of people who are able to earn a full-time living through these VR worlds.  This will become a viable form of entrepreneurship, just like eBay and Google Ads support entrepreneurial ecosystems today. 

Taking all 5 of these points in combination, the original 2006 prediction appears to be on track.  By 2012, hours spent on television will be half of what they were in 2006, with sports and major live events being the only forms of programming that retain their audience. 

Overall, the prediction seems to be well on track.  Disruptive technologies are in the pipeline, and there is plenty of time for each of these technologies to combine into unprecedented new applications.  Let us see what the second half of the time interval, between now and 2012, delivers. 

July 19, 2009 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (20)


SETI and the Singularity

The Search for Extra-Terrestrial Intelligence (SETI) seeks to answer one of the most basic questions of human identity - whether we are alone in the universe, or merely one civilization among many.  It is perhaps the biggest question that any human can ponder. 

The Drake Equation, created by astronomer Frank Drake in 1960, calculates the number of advanced extra-terrestrial civilizations in the Milky Way galaxy in existence at this time.  Watch this 8-minute clip of Carl Sagan in 1980 walking the audience through the parameters of the Drake Equation.  The Drake equation manages to educate people on the deductive steps needed to understand the basic probability of finding another civilization in the galaxy, but as the final result varies so greatly based on even slight adjustments to the parameters, it is hard to make a strong argument for or against the existence of extra-terrestrial intelligence via the Drake equation.  The most speculative parameter is the last one, fL, which is an estimation of the total lifespan of an advanced civilization.  Again, this video clip is from 1980, and thus only 42 years after the advent of radio astronomy in 1938.  Another 29 years, or 70%, have since been added to the age of our radio-astronomy capabilities, and the prospect of nuclear annihilation of our civilization is far lower today than it was in 1980.  No matter how ambitious or conservative of a stance you take on the other parameters, the value of fL in terms of our own civilization, continues to rise.  This leads us to our first postulate :

The expected lifespan of an intelligent civilization is rising.       

Carl Sagan himself believed that in such a vast cosmos, that intelligent life would have to emerge in multiple locations, and the cosmos was thus 'brimming over' with intelligent life.  On the other side are various explanations for why intelligent life will be rare.  The Rare Earth Hypothesis argues that the combination of conditions that enabled life to emerge on Earth are extremely rare.  The Fermi Paradox, originating back in 1950, questions the contradiction between the supposed high incidence of intelligent life, and the continued lack of evidence of it.  The Great Filter theory suggests that many intelligent civilizations self-destruct at some point, explaining their apparent scarcity.  This leads to the conclusion that the easier it is for civilization to advance to our present stage, the bleaker our prospects for long-term survival, since the 'filter' that other civilizations collide with has yet to face us.  A contrarian case can thus be made that the longer we go without detecting another civilization, the better. 

But one dimension that is conspicuously absent from all of these theories is an accounting for the accelerating rate of change.  I have previously provided evidence that telescopic power is also an accelerating technology.  After Galileo first turned the telescope to the sky in 1609, major discoveries used to be several decades apart, but now are only separated by years.  An extrapolation of various discoveries enabled me to crudely estimate that our observational power is currently rising at 26% per year, even though the first 300 years after the invention of the telescope only saw an improvement of 1% a year.  At the time of the 1980 Cosmos television series, it was not remotely possible to confirm the existence of any extrasolar planet or to resolve any star aside from the sun into a disk.  Yet, both were accomplished by the mid-1990s.  As of May 2009, we have now confirmed a total of 347 extrasolar planets, with the rate of discovery rising quickly.  While the first confirmation was not until 1995, we now are discovering new planets at a rate of 1 per week.  With a number of new telescope programs being launched, this rate will rise further still.  Furthermore, most of the planets we have found so far are large.  Soon, we will be able to detect planets much smaller in size, including Earth-sized planets.  This leads us to our second postulate :

Telescopic power is rising quickly, possibly at 26% a year.  
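To see what a 26% annual improvement implies (the 26% figure is my own rough estimate from above), a quick calculation:

```python
import math

# Implications of a 26%/year improvement in observational power.
rate = 0.26
doubling_years = math.log(2) / math.log(1 + rate)
thirty_year_gain = (1 + rate) ** 30

print(round(doubling_years, 1))  # ~3.0 years per doubling
print(round(thirty_year_gain))   # ~1026, i.e. roughly a 1000x gain in 30 years
```

A thousandfold gain per generation is why the 1980-2010 comparison later in this article is so lopsided.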

This Jet Propulsion Laboratory chart of exoplanet discoveries through 2004 is very overdue for an update, but is still instructive.  The x-axis is the distance of the planet from the star, and the y-axis is the mass of the planet.  All blue, red, and yellow dots are exoplanets, while the larger circles with letters in them are our own local planets, with the 'E' being Earth.  Most exoplanet discoveries up to that time were of Jupiter-sized planets that were closer to their stars than Jupiter is to the sun.  The green zone, or 'life zone' is the area within which a planet is a candidate to support life within our current understanding of what life is.  Even then, this chart does not capture the full possibilities for life, as a gas giant like Jupiter or Saturn, at the correct distance from a Sun-type star, might have rocky satellites that would thus also be in the life zone.  In other words, if Saturn were as close to the Sun as Earth is, Titan would also be in the life zone, and thus the green area should extend vertically higher to capture the possibility of such large satellites of gas giants.  The chart shows that telescopes commissioned in the near future will enable the detection of planets in the life zone.  If this chart were updated, a few would already be recorded here.  Some of the missions and telescopes that will soon be sending over a torrent of new discoveries are :

Kepler Mission : Launched in March 2009, the Kepler Mission will continuously monitor a field of 100,000 stars for the transit of planets in front of them.  This method has a far higher chance of detecting Earth-sized planets than prior methods, and we will see many discovered by 2010-11.

COROT : This European mission was launched in December 2006, and uses a similar method as the Kepler Mission, but is not as powerful.  COROT has discovered a handful of planets thus far. 

New Worlds Mission : This 2013 mission will build a large sunflower-shaped occulter in space to block the light of nearby stars to aid the observation of extrasolar planets.  A large number of planets close to their stars will become visible through this method. 

Allen Telescope Array : Funded by Microsoft co-founder Paul Allen, the ATA will survey 1,000,000 stars for radio astronomy evidence of intelligent life.  The ATA is sensitive enough to discover a large radio telescope such as the Arecibo Observatory up to a distance of 1000 light years.  Many of the ATA components are electronics that decline in price in accordance with Moore's Law, which will subsequently lead to the development of the... 

Square Kilometer Array : Far larger and more powerful than the Allen Telescope Array, the SKA will be in full operation by 2020, and will be the most sensitive radio telescope ever.  The continual decline in the price of processing technology will enable the SKA to scour the sky thousands of times faster than existing radio telescopes. 

These are merely the missions that are already under development or even under operation.  Several others are in the conceptual phase, and could be launched within the next 15 years.  So many methods of observation used at once, combined with the cost improvements of Moore's Law, leads us to our third postulate, which few would have agreed with at the time of 'Cosmos' in 1980 :

Thousands of planets in the 'life zone' will be confirmed by 2025. 

Now, we will revisit the under-discussed factor of accelerating change.  Out of 4.5 billion years of Earth's existence, it has only hosted a civilization capable of radio astronomy for 71 years. But as our own technology is advancing on a multitude of fronts, through the accelerating rate of change and the Impact of Computing, each year, the power of our telescopes increases and the signals of intelligence (radio and TV) emitted from Earth move out one more light year.  Thus, the probability for us to detect someone, and for us to be detected by them, however small, is now rising quickly.  Our civilization gained far more in both detectability, and detection-capability, in the 30 years between 1980 and 2010, relative to the 30 years between 1610 and 1640, when Galileo was persecuted for his discoveries and support of heliocentrism, and certainly relative to the 30 years between 70,000,030 and 70,000,000 BC, when no advanced civilization existed on Earth, and the dominant life form was Tyrannosaurus. 

Nikolai Kardashev has devised a scale to measure the level of advancement that a technological civilization has achieved, based on their energy technology.  This simple scale can be summarized as follows :

Type I : A civilization capable of harnessing all the energy available on their planet.

Type II : A civilization capable of harnessing all the energy available from their star.

Type III : A civilization capable of harnessing all the energy available in their galaxy.

The scale is logarithmic, and our civilization currently would receive a Kardashev score of 0.72.  We could potentially achieve full Type I status by the mid-21st century due to a technological singularity.  Some have estimated that our exponential growth could elevate us to Type II status by the late 22nd century.  
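As an aside, the 0.72 figure follows from Carl Sagan's logarithmic interpolation of the Kardashev scale.  A quick sketch (the 1.6e13 W figure for humanity's total power use is my assumed round number, not from this article):

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts.
    Under this formula, Type I sits at 10^16 W, Type II at 10^26 W,
    and Type III at 10^36 W."""
    return (math.log10(power_watts) - 6) / 10

# An assumed ~1.6e13 W of human power use yields roughly the 0.72 cited above.
print(round(kardashev(1.6e13), 2))   # → 0.72
print(round(kardashev(1e16), 1))     # → 1.0 (Type I)
```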

This has given rise to another faction in the speculative debate on extra-terrestrial intelligence, a view held by Ray Kurzweil, among others.  The theory is that the time a civilization takes to go from the earliest mechanical technology to a technological singularity, where artificial intelligence saturates surrounding matter, is so short (a few hundred years) relative to the lifetime of the home planet (a few billion years) that we are the first civilization to come this far.  Given the rate of advancement, a civilization would have to be just 100 years ahead of us to be so advanced that they would be easy to detect within 100 light years, despite 100 years being such a short fraction of a planet's life.  In other words, where a 19th-century Earth would be undetectable to us today, an Earth of the 22nd century would be extremely conspicuous to us from 100 light years away, emitting countless signals across a variety of mediums. 

A Type I civilization within 100 light years would be readily detected by our instruments today.  A Type II civilization within 1000 light years will be visible to the Allen Telescope Array or the Square Kilometer Array.  A Type III would be the only type of civilization that we probably could not detect, as we might have already been within one all along.  We do not have a way of knowing if the current structure of the Milky Way galaxy is artificially designed by a Type III civilization.  Thus, the fourth and final postulate becomes :

A civilization slightly more advanced than us will soon be easy for us to detect.

The Carl Sagan view of plentiful advanced civilizations is the generally accepted wisdom, and a view that I held for a long time.  On the other hand, the Kurzweil view is understood by very few, for even in the SETI community, not that many participants are truly acceleration aware.  The accelerating nature of progress, which existed long before humans even evolved, as shown in Carl Sagan's cosmic calendar concept, also from the 1980 'Cosmos' series, simply has to be considered one of the most critical forces in any estimation of extra-terrestrial life.  I have not yet migrated fully to the Kurzweil view, but let us list our four postulates out all at once :

The expected lifespan of an intelligent civilization is rising.  

Telescopic power is rising quickly, possibly at 26% a year. 

Thousands of planets in the 'life zone' will be confirmed by 2025. 

A civilization slightly more advanced than us will soon be easy for us to detect.

As the Impact of Computing will ensure that computational power rises 16,000X between 2009 and 2030, and as our radio astronomy experience will be 92 years old by 2030, there are simply too many forces increasing our probability of finding a civilization if one does indeed exist nearby.  It is one thing to know of no extrasolar planets and no civilizations.  It is quite another to know about thousands of planets, yet still not detect any civilizations after years of searching.  This would greatly strengthen the case against the existence of such civilizations, and the case would grow stronger by the year.  Thus, these four postulates in combination lead me to conclude that :

2030
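Incidentally, the 16,000X computing figure used above is just the 18-month doubling time compounded from 2009 to 2030:

```python
# The 18-month doubling of computing price-performance is ~59% per year.
annual_gain = 2 ** (12 / 18)          # ≈ 1.587
years = 2030 - 2009                   # 21 years, i.e. 14 doublings
total = annual_gain ** years
print(f"{total:,.0f}")                # → 16,384
```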


Most 'realistic' science fiction regarding first contact with another extra-terrestrial civilization portrays that civilization as residing relatively nearby.  In Carl Sagan's 'Contact', the civilization was from the Vega star system, just 26 light years away.  In the film 'Star Trek : First Contact', humans make first contact with Vulcans in 2063, and the Vulcan homeworld is just 16 light years from Earth.  The possibility of any civilization this near to us would be effectively ruled out by 2030 if we do not find any favorable evidence.  SETI should still be given the highest priority, of course, as the lack of a discovery is just as important as a discovery of extra-terrestrial intelligence. 

If we do detect evidence of an extra-terrestrial civilization, everything about life on Earth will change.  Both 'Contact' and 'Star Trek : First Contact' depicted how an unprecedented wave of human unity swept across the globe upon evidence that humans were, after all, one intelligent species among many.  In Star Trek, this led to what essentially became a techno-economic singularity for the human race.  As shown in 'Contact', many of the world's religions were turned upside down upon this discovery, and had to revise their doctrines accordingly.  Various new cults devoted to the worship of the new civilization formed almost immediately. 

If, however, we are alone, then according to many Singularitarians, we will be the ones to determine the destiny of the cosmos.  After a technological singularity in the mid-21st century that merges our biology with our technology, we would proceed to convert all matter into artificial intelligence, make use of all the elementary particles in our vicinity, and expand outward at speeds that eventually exceed the speed of light, ultimately saturating the entire universe with our intelligence in just a few centuries.  That, however, is a topic for another day.   

May 23, 2009 in Accelerating Change, Core Articles, Space Exploration, The Singularity | Permalink | Comments (28) | TrackBack (0)


The Impact of Computing : 78% More per Year, v2.0

Anyone who follows technology is familiar with Moore's Law and its many variations, and has come to expect the price of computing power to halve every 18 months.  But many people don't see the true long-term impact of this beyond the need to upgrade their computer every three or four years.  To not internalize this more deeply is to miss financial opportunities, grossly mispredict the future, and be utterly unprepared for massive, sweeping changes to human society.  Hence, it is time to update the first version of this all-important article that was written on February 21, 2006.

Today, we will introduce another layer to the concept of Moore's Law-type exponential improvement. Consider that on top of the 18-month doubling times of both computational power and storage capacity (an annual improvement rate of 59%), both of these industries have grown by an average of approximately 12% a year for the last fifty years. Individual years have ranged between +30% and -12%, but let us say that the trend growth of both industries is 12% a year for the next couple of decades.

So, we can conclude that a dollar gets 59% more power each year, and 12% more dollars are absorbed by such exponentially growing technology each year. If we combine the two growth rates to estimate the rate of technology diffusion simultaneously with exponential improvement, we get (1.59)(1.12) = 1.78

The Impact of Computing grows at a scorching pace of 78% a year.
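That arithmetic can be written out explicitly:

```python
performance_growth = 1.59   # a dollar buys 59% more computing power each year
dollar_growth = 1.12        # 12% more dollars flow into these industries each year

# Combining diffusion with improvement gives the Impact of Computing rate.
impact = performance_growth * dollar_growth
print(round(impact, 2))     # → 1.78, i.e. 78% more per year
```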

Sure, this is a very imperfect method of measuring technology diffusion, but many visible examples of this surging wave present themselves.  Consider the most popular television shows of the 1970s, where the characters had all the household furnishings and electrical appliances that are common today, except for anything with computational capacity. Yet, economic growth has averaged 3.5% a year since that time, nearly doubling the standard of living in the United States since 1970. It is obvious what has changed during this period, to induce the economic gains.

We can take the concept even closer to the present.  Among 1990s sitcoms, how many plot devices would no longer exist in the age of cellphones and Google Maps?  Consider the episode of Seinfeld devoted entirely to the characters not being able to find their car, or each other, in a parking structure (1991).  Or this legendary bit from a 1991 episode in a Chinese restaurant.  These situations are simply obsolete in the era of cellphones.  This situation (1996) would be obsolete in the era of digital cameras, while the 'Breakfast at Tiffany's' situation would be obsolete in an era of Netflix and YouTube. 

In the 1970s, there was virtually no household product with a semiconductor component.  In the 1980s, many people bought basic game consoles like the Atari 2600, had digital calculators, and purchased their first VCR, but only a fraction of the VCR's internals, maybe 20%, consisted of exponentially deflating semiconductors, so VCR prices did not drop much per year.  In the early 1990s, many people began to have home PCs. For the first time, a major, essential home device was pegged to the curve of 18-month halvings in cost per unit of power.  In the late 1990s, the PC was joined by the Internet connection and the DVD player. 

Now, I want everyone reading this to tally up all the items in their home that qualify as 'Impact of Computing' devices, which is any hardware device where a much more powerful/capacious version will be available for the same price in 2 years.  You will be surprised at how many devices you now own that did not exist in the 80s or even the 90s.

Include : Actively used PCs, LCD/Plasma TVs and monitors, DVD players, game consoles, digital cameras, digital picture frames, home networking devices, laser printers, webcams, TiVos, Slingboxes, Kindles, robotic toys, every mobile phone, every iPod, and every USB flash drive.  Count each car as 1 node, even though modern cars may have $4000 of electronics in them.

Do not include : Tube TVs, VCRs, film cameras, individual video games or DVDs, or your washer/dryer/oven/clock radio just for having a digital display, as the product is not improving dramatically each year. 

How many 'Impact of Computing' Nodes do you currently own?
Under 10
11-15
16-20
21+
  

If this doesn't persuade people of the exponentially accelerating penetration of information technology, then nothing can.

To summarize, the number of devices in an average home that are on this curve, by decade :

1960s and earlier : 0

1970s : 0-1

1980s : 1-2

1990s : 3-4

2000s : 6-12

2010s : 15-30

2020s : 40-80

The average home of 2020 will have multiple ultrathin TVs hung like paintings, robots for a variety of simple chores, VR-ready goggles and gloves for advanced gaming experiences, sensors and microchips embedded into clothing, $100 netbooks more powerful than $10,000 workstations of today, surface computers, 3-D printers, intelligent LED lightbulbs with motion-detecting sensors, cars with features that even luxury models of today don't have, and at least 15 nodes on a home network that manages the entertainment, security, and energy infrastructure of the home simultaneously. 

At the industrial level, the changes are even greater.  Just as happened with telephony, photography, video, and audio before them, we will see the medicine, energy, and manufacturing industries become information technology industries, and thus advance at the rate of the Impact of Computing.  The economic impact of this is staggering.  Refer to the Future Timeline for Economics, particularly the 2014, 2024, and 2034 entries.  Deflation has traditionally been a bad thing, but the Impact of Computing has introduced a second form of deflation.  A good one. 

It is true that from 2001 to 2009, the US economy has actually shrunk in size, if measured in oil, gold, or Euros.  To that, I counter that every major economy in the world, including the US, has grown tremendously if measured in Gigabytes of RAM, TeraBytes of storage, or MIPS of processing power, all of which have fallen in price by about 40X during this period.  One merely has to select any suitable product, such as a 42-inch plasma TV in the chart, to see how quickly purchasing power has risen.  What took 500 hours of median wages to purchase in 2002 now takes just 40 hours of median wages in 2009.  Pessimists counter that computing is too small a part of the economy for this to be a significant prosperity elevator.  But let's see how much of the global economy is devoted to computing relative to oil (let alone gold).
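A rough sanity check of those figures, using the 59%/year curve from earlier in this article and the wage-hour numbers from the paragraph above:

```python
# RAM/storage/MIPS prices falling ~40x over 2001-2009 is exactly what the
# 59%/yr price-performance curve predicts over 8 years.
ram_factor = 1.59 ** (2009 - 2001)
print(round(ram_factor))          # → 41, i.e. roughly the 40X cited

# The plasma TV falling from 500 to 40 hours of median wages (2002-2009)
# implies about a 30% annual price decline in wage-hour terms.
tv_decline = 1 - (40 / 500) ** (1 / (2009 - 2002))
print(f"{tv_decline:.0%}")        # → 30%
```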

Oil at $50/barrel amounts to about $1500 Billion per year out of global GDP.  When oil rises, demand falls, and we have not seen oil demand sustain itself to the extent of elevating annual consumption to more than $2000 Billion per year.

Semiconductors are a $250 Billion industry and storage is a $200 Billion industry.  Software, photonics, and biotechnology are deflationary in the same way as semiconductors and storage, and these three industries combined are another $500 Billion in revenue, but their rate of deflation is less clear, so let's take just half of this number ($250 Billion) as suitable for this calculation.

So $250B + $200B + $250B = $700 Billion that is already deflationary under the Impact of Computing.  This is about 1.5% of world GDP, and is a little under half the size of global oil revenues. 
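The arithmetic behind this comparison, where the global oil demand (~85 million barrels/day) and world GDP (~$47 Trillion) figures are my assumed round numbers rather than values from the article:

```python
# Global oil revenue at $50/barrel, assuming ~85 million barrels/day of demand.
oil_revenue = 50 * 85e6 * 365 / 1e9          # $/bbl x bbl/day x 365, in $ Billions
print(round(oil_revenue))                    # → 1551, matching the ~$1500B cited

# Deflationary computing sectors: semis + storage + half of the
# software/photonics/biotech group.
computing = 250 + 200 + 250                  # $700 Billion
print(f"{computing / 47_000:.1%}")           # → 1.5% of an assumed $47T world GDP
```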

The impact is certainly not small, and since the growth rate of these sectors is higher than that of the broader economy, what about when it becomes 3% of world GDP?  5%?  Will this force of good deflation not exert influence on every set of economic data?  At the moment, it is all but impossible to get major economics bloggers to even acknowledge this growing force.  But over time, it will be accepted as a limitless well of rising prosperity. 

12% more dollars spent each year, and each dollar buys 59% more power each year.  Combine the two and the impact is 78% more every year. 

Related :

A Future Timeline for Economics

Economic Growth is Exponential and Accelerating

Are You Acceleration Aware?

Pre-Singularity Abundance Milestones

The Technological Progression of Video Games

 

April 20, 2009 in Accelerating Change, Computing, Core Articles, Technology, The Singularity | Permalink | Comments (41) | TrackBack (0)

Tags: computing, future, Moore's Law


Nanotechnology : Bubble, Bust, ....Boom?

All of us remember the dot-com bubble, the crippling bust that eventually was a correction of 80% from the peak, and the subsequent moderated recovery.  This was easy to notice as there were many publicly traded companies that could be tracked daily.

I believe that nanotechnology underwent a similar bubble, peaking in early 2005, and has been in a bust for the subsequent four years.  Allow me to elaborate.

By 2004, major publications were talking about nanotech as if it were about to surge.  Lux Capital was publishing a much-anticipated annual 'Nanotech Report'.  There was even a company by the name of NanoSys that was preparing for an IPO in 2004.  BusinessWeek devoted an entire issue to all things nanotech in February 2005.  We were supposed to get excited. 

But immediately after the BusinessWeek cover, everything seemed to go downhill.  Nanosys did not conduct an IPO, nor did any other nanotech company.  Lux Capital published only a much shorter report in 2006, and stopped altogether in 2007 and 2008.  No other major publication devoted an entire issue to the topic of nanotechnology.  Venture capital flowing to nanotech ventures dried up.  Most importantly, people stopped talking about nanotechnology altogether.  Not many people noticed this because they were too giddy about their home prices rising, but to me, this shriveling of nano-activity had uncanny parallels to prior technology slumps. 

The rock bottom was reached at the very end of 2008.  Regular readers will recall that on January 3, 2009, I noticed that MIT Technology Review conspicuously omitted a section titled 'The Year in Nanotech' among their year-end roundup of innovations for the outgoing year.  I could not help but wonder why they stopped producing a nanotech roundup altogether, and I subsequently concluded that we were in a multi-year nanotech winter, and that the MIT Technology Review omission marked the lowest point.

But there are signs that nanotech is on the brink of emerging from its chrysalis.  The university laboratories are humming again, promising to draw the genie out of its magic lamp.  In just the first 12 weeks of 2009, carbon nanotubes, after staying out of the news for years, have suddenly been making headlines.  Entire 'forests' of nanotubes are now being grown (image from MIT Tech Review) and can be used for a variety of previously unrelated applications.  Beyond this, there is suddenly activity in nanotube electronics, light-sensitive nanotubes, nanotube superbatteries, and even nanotube muscles that are as light as air, flexible as rubber, but stronger than steel.  And all this is just nanotubes.  Nanomedicine, nanoparticle glue, and nanosensors are also joining the party.  All this bodes well for the prospect of catching up to where we currently should be on the trendline of molecular engineering, and enabling us to build what was previously impossible. 

The recovery out of the four-year nanotech winter could not be happening at a better time.  Nanotech is thus set to be one of the four sectors of technology (the others being solar energy, surface computing, and wireless data) that pull the global economy into its next expansion starting in late 2009. 

Related :

Milli, Micro, Nano, Pico

March 23, 2009 in Accelerating Change, Nanotechnology, Technology, The Singularity | Permalink | Comments (21) | TrackBack (0)

Tags: nanotech


A Future Timeline for Economics

The accelerating rates of change in many fields of technology all manifest themselves in terms of human development, some of which can be accurately tracked within economic data.  Contrary to what the media may peddle and despite periodic setbacks, average human prosperity is rising at a rate faster than at any other time in human history.  I have described this in great detail in prior articles, and I continue to be amazed at how little attention is devoted to the important subject of accelerating economic growth, even by other futurists.

The time has thus come for making specific predictions about the details of future economic advancement.  I hereby present a speculative future timeline of economic events and milestones, which is a sibling article to Economic Growth is Exponential and Accelerating, v2.0. 

2008-09 : A severe US recession and global slowdown still results in global PPP economic growth staying positive in calendar 2008 and 2009.  Negative growth for world GDP, which has not happened since 1973, is not a serious possibility, even though the US and Europe experience GDP contraction in this period.  The world GDP growth rate trendline resides at growth of 4.5% a year.

2010 : World GDP growth rebounds strongly to 5% a year.  More than 3 billion people now live in emerging economies growing at over 6% a year.  More than 80 countries, including China, have achieved a Human Development Index of 0.800 or higher, classifying them as developed countries. 

2011 : Economic mismanagement in the US leads to a tax increase at the start of 2011, combined with higher interest rates on account of the budget deficit.  This leads to a near-recession or even a full recession in the US, despite the recovery out of the 2008-09 recession still being young. 

2012 : Over 2 billion people have access to unlimited broadband Internet service at speeds greater than 1 mbps, a majority of them receiving it through their wireless phone/handheld device. 

2013 : Many single-family homes in the US, particularly in California, are still priced below the levels they reached at the peak in 2006, as predicted in early 2006 on The Futurist.  If one adjusts for cost of capital over this period, many California homes have corrected their valuations by as much as 50%. 

2014 : The positive deflationary economic forces introduced by the Impact of Computing are now large and pervasive enough to generate mainstream attention.  The semiconductor and storage industries combined exceed $800 Billion in size, up from $450 Billion in 2008.  The typical US household is now spending $2500 a year on semiconductors, storage, and other items with rapidly deflating prices per fixed performance.  Of course, the items purchased for $2500 in 2014 can be purchased for $1600 in 2015, $1000 in 2016, $600 in 2017, etc. 
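The declining sequence in that entry (and in the similar 2024 and 2034 entries below) is just a fixed basket repriced along the 59%/year curve:

```python
# A fixed-performance basket costs 1/1.59 ≈ 0.63x as much each subsequent year.
def deflated(cost: float, years: int) -> float:
    return cost / (1.59 ** years)

# Reproduces the $2500 → $1600 → $1000 → $600 sequence (to the nearest $100).
for y in range(4):
    print(2014 + y, round(deflated(2500, y), -2))
```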

2015 : As predicted in early 2006 on The Futurist, a 4-door sedan with a 240 hp engine, yet costing only 5 cents/mile to operate (the equivalent of 60 mpg of gasoline), is widely available for $35,000 (which is within the middle-class price band by 2015). This is the result of combined advances in energy, lighter nanomaterials, and computerized systems.

2016 : Medical Tourism introduces $100B/year of net deflationary benefit to healthcare costs in the US economy.  Healthcare inflation is slowed, except for the most advanced technologies for life extension. 

2017 : China's per-capita GDP on a PPP basis converges with the world average, resulting in a rise in the Yuan exchange rate.  This is neither good nor bad, but very confusing for trade calculations.  A recession ensues while all the adjustments are sorted out. 

2018 : Among new cars sold, gasoline-only vehicles are now a minority.  Millions of vehicles are electrically charged through solar panels on a daily basis, relieving those consumers of a fuel expenditure that was as high as $3000 a year in 2008.  Some electrical vehicles cost as little as 1 cent/mile to operate. 

2019 : The Dow Jones Industrial Average surpasses 25,000.  The Nasdaq exceeds 5000, finally surpassing the record set 19 years prior in early 2000. 

2020 : World GDP per capita surpasses $15,000 in 2008 dollars (up from $8000 in 2008).  Over 100 of the world's nations have achieved a Human Development Index of 0.800 or higher, with the only major concentrations of poverty being in Africa and South Asia.  The basic necessities of food, clothing, literacy, electricity, and shelter are available to over 90% of the human race. 

Trade between India and the US touches $400 Billion a year, up from only $32 Billion in 2006. 

2022 : Several million people worldwide are each earning over $50,000 a year through web-based activities.  These activities include blogging, barter trading, video production, web-based retail ventures, and economic activities within virtual worlds.  Some of these people are under the age of 16.  Headlines will be made when a child known to be perpetually glued to his video game one day surprises his parents by disclosing that he has accumulated a legitimate fortune of more than $1 million. 

2024 : The typical US household is now spending over $5000 a year on products and services that are affected by the Impact of Computing, where value received per dollar spent rises dramatically each year.  These include electronic, biotechnology, software, and nanotechnology products.  Even cars are sometimes 'upgraded' in a PC-like manner in order to receive better technology, long before they experience mechanical failure.  Of course, the products and services purchased for this $5000 in 2024 can be obtained for $3200 in 2025, $2000 in 2026, $1300 in 2027, etc. 

2025 : The printing of solid objects through 3-D printers is inexpensive enough for such printers to be common in upper-middle-class homes.  This disrupts the economics of manufacturing, and revamps most manufacturing business models. 

2027 : 90% of humans are now living in nations with a UN Human Development Index greater than 0.800 (the 2008 definition of a 'developed country', approximately that of the US in 1960).  Many Asian nations have achieved per capita income parity with Europe.  Only Africa contains a major concentration of poverty. 

2030 : The United States still has the largest nominal GDP among the world's nations, in excess of $50 Trillion in 2030 dollars.  China's economy is a close second to the US in size.  No other country surpasses even half the size of either of the twin giants. 

The world GDP growth rate trendline has now surpassed 5% a year.  As the per capita gap has reduced from what it was in 2000, the US now grows at 4% a year, while China grows at 6% a year. 

10,000 billionaires now exist worldwide, causing the term to lose some exclusivity. 

2032 : At least 2 TeraWatts of photovoltaic capacity is in operation worldwide, generating 8% of all energy consumed by society.  Vast solar farms covering several square miles are in operation in North Africa, the Middle East, India, and Australia.  These farms are visible from space. 

2034 : The typical US household is now spending over $10,000 a year on products and services that are affected by the Impact of Computing.  These include electronic, biotech, software, and nanotechnology products.  Of course, the products and services purchased for this $10,000 in 2034 can be obtained for $6400 in 2035, $4000 in 2036, $2500 in 2037, etc. 

2040 : Rapidly accelerating GDP growth is creating astonishing abundance that was unimaginable at the start of the 21st century.  Inequality continues to be high, but this is balanced by the fact that many individual fortunes are created in extremely short times.  The basic tools to produce wealth are available to at least 80% of all humans. 

Greatly increased lifespans are distorting economics, mostly for the better, as active careers last well past the age of 80. 

Tourism into space is affordable for upper middle class people, and is widely undertaken. 

________________________________________________________

I believe that this timeline represents a median forecast for economic growth from many major sources, and will be perceived as too optimistic or too pessimistic by an equal number of readers.  Let's see how closely reality tracks this timeline.

September 28, 2008 in Accelerating Change, China, Computing, Core Articles, Economics, Energy, India, The Singularity | Permalink | Comments (56)

Tags: Accelerating, China, Economics, Economy, Event Horizon, Future, GDP, Moore's Law, Singularity


The Futurist's Stock Portfolio for 2009

Today, September 15, 2008, represented just about a perfect day for buying new equity positions.  I am going to present my 2009 portfolio, which will be tracked over the next 15.5 months between now and the end of 2009, in relation to the S&P500 index.  My 2008 portfolio is still current, and will be evaluated at the end of 2008, so the start of this 2009 portfolio will overlap with the end of the 2008 portfolio.  To assess my track record, my 2007 portfolio delivered a superb 13.3% return, relative to just 4.3% for the S&P500 over the same period. 

For 2009, the portfolio is quite simple.  I believe that small-cap value and financial stocks are at historically compelling valuations, and have no choice but to rise.  A few major technology stocks are also at attractive valuations. 

So the portfolio will be :

2009 Stock  

This captures the following trends from previous articles on The Futurist :

The Next Big Thing in Entertainment, Part I and Part 2

The Impact of Computing

The Stock Market is Exponentially Accelerating too

I hereby sign and seal this portfolio, bought at the closing prices on September 15, 2008, to be evaluated on the last trading day before December 31, 2009.     

(crossposted on TechSector)

September 15, 2008 in Accelerating Change, Economics, Stock Market | Permalink | Comments (5) | TrackBack (0)


Pre-Singularity Abundance Milestones

I am of the belief that we will experience a Technological Singularity around 2050 or shortly thereafter. Many top futurists arrive at prediction dates between 2045 and 2075. The bulk of Singularity debate revolves not so much around 'if' or even 'when', but rather around what the Singularity will look like, and whether it will be positive or negative for humanity.

To be clear, some singularities have already happened.  To non-human creatures, a technological singularity that overhauled their ecosystem already happened over the course of the 20th century.  Domestic dogs and cats are immersed in a singularity where most of their surroundings surpass their comprehension.  Even many humans have experienced a singularity - elderly people in poorer nations make no use of any of the major technologies of the last 20 years, except possibly the cellular phone.  However, the Singularity that I am talking about has to be one that affects all humans, and the entire global economy, rather than just humans who are marginal participants in the economy.  By definition, the real Technological Singularity has to be a 'disruption in the fabric of humanity'. 

In the period between 2008 and 2050, there are several milestones one can watch for in order to see if the path to a possible Singularity is still being followed.  Each of these signifies a previously scarce resource becoming almost infinitely abundant (much like paper today, which was a rare and precious treasure centuries ago), or a dramatic expansion in human experience (as the telephone, airplane, and Internet have been) to the extent that it can even be called a transhuman experience.  The following are a random selection of milestones with their anticipated dates. 

Technological :

Hours spent in videoconferencing surpass hours spent in air travel/airports : 2015

Video games with interactive, human-level AI : 2018

Semi-realistic fully immersive virtual reality : 2020

Over 5 billion people connected to the Internet (mostly wirelessly) at speeds greater than 10 Mbps : 2022

Over 30 network-connected devices in the average household worldwide : 2025

1 TeraFLOPS of computing power costs $1 : 2026

1 TeraWatt of worldwide photovoltaic power capacity : 2027

1 Petabyte of storage costs $1 : 2028

1 Terabyte of RAM costs $1 : 2031

An artificial intelligence can pass the Turing Test : 2040

Biological :

Complete personal genome sequencing costs $1000 : 2020

Cancer is no longer one of the top 5 causes of death : 2025

Complete personal genome sequencing costs $10 : 2030

Human life expectancy achieves Actuarial Escape Velocity for wealthy individuals : 50% chance by 2040

Economic :

Average US household net worth crosses $2 million in nominal dollars : 2024

90% of humans living in nations with a UN Human Development Index greater than 0.800 (the 2008 definition of a 'developed country', approximately that of the US in 1960) : 2025

10,000 billionaires worldwide (nominal dollars) : 2030

World GDP per Capita crosses $50,000 in 2008 dollars : 2045
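For the hardware-cost milestones in the Technological list, such dates can be back-of-enveloped from a starting cost and the 18-month halving time.  A sketch, where the $1,000/TeraFLOPS baseline is purely illustrative, not a figure from this post:

```python
import math

def years_to_target(cost_now: float, target: float, halving_months: float = 18) -> float:
    """Years until a cost halving every `halving_months` reaches `target`."""
    return math.log2(cost_now / target) * halving_months / 12

# If a TeraFLOPS cost ~$1,000 in 2008, the $1 milestone lands ~15 years out.
# Different starting figures shift the crossing into the mid-2020s.
print(round(2008 + years_to_target(1000, 1)))
```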

_________________________________________________________________

Each of these milestones, while not causing a Singularity by themselves, increase the probability of a true Technological Singularity, with the event horizon pulled in closer to that date.  Or, the path taken to each of these milestones may give rise to new questions and metrics altogether.  We must watch for each of these events, and update our predictions for the 'when' and 'what' of the Singularity accordingly. 

Related : The Top 10 Transhumanist Technologies

September 11, 2008 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (24) | TrackBack (0)

Tags: Acceleration, Future, Futurist, Moore's Law, Prosperity, Singularity


Can Buildings be 'Printed'?

I have discussed the possibility of 3-D printing of solid objects before, in this article where company #5, Desktop Factory, is detailed.  However, the Desktop Factory product can only produce objects that have a maximum size of 5 X 5 X 5 inches, and it can only use one type of material. 

On the Next Big Future blog, the author quite frequently profiles a future product capable of 'printing' entire buildings.  This technology, known as 'Contour Crafting', can supposedly construct buildings at greater than 10 times the speed, yet at just one-fifth the cost, of traditional construction processes.  It is claimed that the first commercial machines will be available as early as 2008. 

Despite my general optimism, this particular machine does not pass my 'too good to be true' test, at least before 2020.  A machine that could construct homes and commercial buildings at such a speed and cost would cause an unprecedented economic disruption across the world.  There would be a steep but brief depression, as existing real estate loses 90% or more of its value, followed by a huge boom as home ownership becomes affordable to several times as many people as today.  I don't think that we are on the brink of such a revolution.

For me to be convinced, I would have to see :

1) Articles on this device in mainstream publications like The Economist, BusinessWeek, MIT Technology Review, or Popular Mechanics.

2) The ability to at least print simple constructs like concrete perimeter walls or sidewalks at a rate and cost several times superior to current methods.  Only then can more complex structures be on the horizon. 

I will revisit this technology if either of these two conditions is solidly met. 

(crossposted on TechSector). 

September 02, 2008 in Accelerating Change, Economics, Technology, The Singularity | Permalink | Comments (21) | TrackBack (0)

Tags: Construction, Contour Crafting, Printing, Real Estate


Surfaces : The Next Killer App in Computing

Computing, once seamlessly synonymous with technological progress, has not grabbed headlines in recent memory. We have not had a 'killer app' in computing in the last few years.  Maybe you can count Wi-Fi access on laptops in 2002-03 as the most recent one, but if that is not a sufficiently important innovation, we then have to go all the way back to the graphical World Wide Web browser in 1995.  Before that, the killer app was Microsoft Office for Windows in 1990.  Clearly, such shifts appear to occur at intervals of 5-8 years. 

I can, without hesitation, nominate surface computing as the next great generational augmentation in the computing experience.  This is because surface computing entirely transforms the human-computer interaction in a manner that is more suitable for the human body than the mouse/keyboard model is. In accordance with the Impact of Computing, rapid drops in the costs of both high-definition displays and tactile sensors are set to bring this experience to consumers by the end of this decade.

Surface

BusinessWeek has a slideshow featuring several different products for surface computing. Over ten major electronics companies have surface computing products available. The most visible is the Microsoft Surface, which sells for about $10,000, but will probably drop to $3000 or less within 3-4 years, enabling household adoption.
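If the price decline quoted above holds, the implied annual rate of decline is easy to back out.  A quick sketch, with the caveat that the smooth constant-rate path between the two price points is my assumption, not a claim from the article :

```python
# Implied annual price decline for a surface computer falling from the
# $10,000 quoted above to $3,000 within 3-4 years.  The constant-rate
# exponential path between those two endpoints is an assumption.
def annual_decline(start_price, end_price, years):
    """Constant yearly fractional price drop taking start_price to end_price."""
    return 1 - (end_price / start_price) ** (1 / years)

for years in (3, 4):
    rate = annual_decline(10_000, 3_000, years)
    print(f"{years} years -> {rate:.0%} decline per year")
```

A decline in the range of 26-33% per year is aggressive, but not unheard of for consumer electronics built on commodity displays and sensors.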

As for early applications of surface computing, a fertile imagination can yield many prospects. For example, a restaurant table may feature a surface that displays the menu, enabling patrons to order simply by touching the picture of the item they choose.  The information is sent to the kitchen, which saves time and reduces the number of waiters needed by the restaurant (as waiters would only be needed to deliver the completed orders).  Applications for classroom and video-game settings also readily present themselves. 

Watch for demonstrations of various surface computers at your local electronics store, and keep an eye on the price drops.  After seeing a demonstration, do share at what price point you might purchase one.  The next generation of computing beckons. 

Related :

The Impact of Computing

(Crossposted on TechSector)

July 11, 2008 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (7) | TrackBack (0)


Ten Biotechnology Breakthroughs Soon to be Available

Popular Mechanics has assembled one of those captivating lists of new technologies that will improve our lives, this time on healthcare technologies (via Instapundit).  Just a few years ago, these would have appeared to be works of science fiction.  Go to the article to read about the ten technologies shown below. 


Most of these will be available to average consumers within the next 7-10 years, and will extend lifespans while dramatically lowering healthcare costs (mostly through enhanced capabilities of early detection and prevention, as well as shorter recovery times for patients).  This is consistent with my expectation that bionanotechnology is quietly moving along established trendlines despite escaping the notice of most people.  These technologies will also move us closer to Actuarial Escape Velocity, where the rate of lifespan increase exceeds that of real time. 

Another area these technologies affect is the globalization of healthcare.  We have previously noted the success of 'medical tourism' among US and European patients seeking massive discounts on expensive procedures.  These technologies, given their potential to lower costs and recovery times, are even more suitable for medical offshoring than their predecessors, and thus could further enhance the competitive position of the countries that are quicker to adopt them.  If the US is at the forefront of using the 'bloodstream bot' to unclog arteries, the US once again becomes more attractive than getting a traditional procedure done in India or Thailand.  But if the lower-cost destinations also adopt these technologies faster than the heavily regulated US, then even more revenue migrates overseas and the US healthcare sector would suffer further deserved blows, and be under even greater pressure to conform to market forces.  As technology once again acts as the great leveler, another spark of hope for reforming the dysfunctional US healthcare sector has emerged. 

These technologies are near enough to availability that you may even consider showing this article to your doctor, or writing a letter to your HMO.  Plant the seed in their minds...

Related :

Actuarial Escape Velocity

How Far Can 'Medical Tourism' Go?

Milli, Micro, Nano, Pico

May 09, 2008 in Accelerating Change, Biotechnology, Computing, Nanotechnology, Technology, The Singularity | Permalink | Comments (11) | TrackBack (0)


A Rebuttal to 'Peak Oil' Doomsday Predictions

At The Oil Drum, a detailed article by 'Gail the Actuary' speculates on how declining production of oil combined with rising demand will cause an economic catastrophe, leading to the global economy contracting so severely, that by 2040 it is much smaller than it is today.  The author actually believes that in 2040, most people will no longer be able to afford cars, electricity will be unreliable, and goods and services will be fewer and rarer than today. 

Another article submitted by a different contributor on The Oil Drum arrives at the same pessimistic conclusion, stating that 'economic growth will end one way or another'.  Most of the commenters on both articles are in a groupthink state of agreement that can best be described as a Maoist-Malthusian cult. 

I would normally not bother to rebut something like this, except that this particular essay is so stunningly wrong and so annoyingly pessimistic, despite the seemingly meticulous research the author has conducted, that I am compelled to dissect how insulated groupthink can spiral into a zone where even the most extreme conclusions are accepted. 

Note that I happen to be someone who actually does believe in Peak Oil theory, but that such a condition generates long-term positives that outweigh short-term negatives. 

The assumptions that the 'Peak Oil' doomsday scenario makes are :

1) That rising oil prices do not cause a long-term downward adjustment in demand.  Oil demand may be inelastic in the short-term, but in the long term, people will buy more efficient cars, carpool, ride bicycles, reduce discretionary trips, conduct more commerce online, etc.  To assume otherwise is to ignore the most basic law of economics.  This is before even accounting for the indirect benefits of declining oil demand such as a drop in traffic fatalities (which cost $2 million apiece to the economy), less wear and tear on roads and tires, less pollution, less real estate consumed by gas stations, less competition for parking spaces, etc. 

2) That rising grain prices will not move consumption away from increasingly expensive meat towards affordable grains, fruits, and vegetables, thereby reducing grain and water demand.  This, too, is economic illiteracy.  If the price of beef triples while the price of rice and potatoes does not, consumption patterns shift.   

3) That there will be very little technological innovation in alternative energy, automobile efficiency, batteries, or information technology from this point on.  In fact, there is innovation in all of those areas, so we have multiple layers of protection against the doomsday scenario, as detailed by these articles :

A Future Timeline for Energy

A Future Timeline for Automobiles

Batteries Set to Advance, Finally

Solar Energy Cost Curve

Terrorism, Oil, Globalization, and the Impact of Computing

4) That most economic growth is not in knowledge-based industries, which consume far less energy per dollar of output.  The US economy today produces twice the financial output per unit of oil consumption as it did in 1975, with information technology rising as a portion of total economic output. 

5) That a major economic downturn, featuring skyrocketing food prices for people in poorer countries, will somehow not translate to a lower birth rate that inhibits population growth and hence curbs demand, and that population projections will somehow not change. 

6) That there will be no humans living beyond the Earth (whether in orbit or on the Moon) by 2040.  This point is relevant because a society cannot advance in space travel without simultaneous advances in energy technology.  I expect advances in photovoltaic efficiency to bring lunar colonies closer to viability by that time. 

7) That we are going to have over 30 years of negative growth in World GDP, despite not having had a single year of negative growth since 1973, and despite the trendline of growth solidly registering at 4.5% a year even today.  I happen to think that by 2040, the world economy will be 4 times larger than it is today.  Even the Great Depression was only 5 years of negative growth, followed by a recovery that elevated prosperity to levels higher than they were in 1929, at a time when World GDP was only at a trendline of 2% annual growth, or less than half the level of today.  Yet Gail the Actuary thinks car ownership will no longer be affordable to most people by 2040. 
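Two of the quantitative claims in the list above (points 4 and 7) can be sanity-checked with simple compound-growth arithmetic.  A minimal sketch, noting that the oil-efficiency figure is my own back-calculation, not a number the articles quote directly :

```python
# Point 7: at the 4.5% annual trendline, how much larger is World GDP
# in 2040 than in 2008?
gdp_multiple = 1.045 ** (2040 - 2008)
print(f"World GDP multiple by 2040: {gdp_multiple:.1f}x")

# Point 4: if output per unit of oil doubled between 1975 and 2008, the
# implied annual efficiency gain (a derived figure, not a quoted one):
efficiency_gain = 2 ** (1 / (2008 - 1975)) - 1
print(f"Implied oil-efficiency gain: {efficiency_gain:.1%} per year")
```

The compounding yields roughly a 4x expansion by 2040, which is where the 'four times larger' figure comes from, and a steady 2% annual improvement in oil efficiency is enough to double output per barrel in a generation.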

Peak oil may be on the horizon, but the US economy has already adapted to oil at sustained prices of $70 or $80/barrel (which is the biggest story that no one is noticing yet), and will soon adapt to $100/barrel.  I want oil to hit a sustained $120/barrel by 2010 to start a virtuous cycle of technological and geopolitical chain reactions that make the world a better place in the long term.  If oil hits $200/barrel, that will cause a deep recession that could last several years, but after that point, we will have adapted out of the oil burden almost entirely, and World GDP growth will resume at 5% a year. 

Could I be wrong and they be right?  Well, let us first see if oil rises substantially above $120/barrel, and if that year has negative World GDP. 

Does anyone feel like defending the doomsday prediction from The Oil Drum?

March 28, 2008 in Accelerating Change, Economics, Energy, Politics | Permalink | Comments (55)


Actuarial Escape Velocity

Every now and then, an obscure concept is so brilliantly encapsulated in a compact yet sublime term that it leaves the audience inspired enough to evangelize it. 

I have felt that way ever since I heard the words 'Actuarial Escape Velocity'.

For some background, please refer to an older article from early 2006, 'Are You Prepared to Live to 100?'.  Notice the historical uptrend in human life expectancy, and the accelerating rate of increases.  For more, do also read the article 'Are You Acceleration Aware?'.

In analyzing the rate at which life expectancy is increasing in the wealthiest nations, we see that US life expectancy is now increasing by 0.2 years, every year.  Notably, the death rates from heart disease and cancer have been dropping by a rapid 2-4% each year, and these two leading causes of death are quickly falling off, despite rising obesity and a worsening American diet over the same period.  Just a few decades ago, the rate of increase in life expectancy was slower than 0.2 years per year.  In the 19th century, even the wealthiest societies were adding well under 0.1 years per year.  But how quickly can the rate of increase continue to rise, and does it eventually saturate as each unit of gain becomes increasingly harder to achieve?

Two of the leading thinkers in the field of life extension, Ray Kurzweil and Aubrey de Grey, believe that by the 2020s, human life expectancy will increase by more than one year every year (in 2002 Kurzweil predicted that this would happen as soon as 2013, but this is just another example of him consistently overestimating the rate of change).  This means that death will approach the average person at a slower rate than the rate of technology-driven lifespan increases.  It does not mean that all death suddenly stops, but it does mean that those who are not close to death have a possibility of indefinite lifespan after AEV is reached.  David Gobel, founder of the Methuselah Foundation, has termed this Actuarial Escape Velocity (AEV), comparing the rate of lifespan extension to the speed at which a spacecraft breaks free of the gravitational pull of the planet it launches from.  Thus, life expectancy is currently, as of 2007 data, rising at 20% of Actuarial Escape Velocity.
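To see how far 20% of AEV is from the full threshold, consider a toy model in which the 0.2 years-per-year rate of gain itself compounds.  The 7% annual acceleration used below is purely illustrative, not a figure from the text or from Kurzweil :

```python
# Toy model: years until the lifespan gain per calendar year reaches 1.0
# (full AEV), starting from the 0.2 years-per-year figure above.  The 7%
# annual acceleration of the rate of gain is an illustrative assumption.
def years_to_aev(current_rate=0.2, acceleration=0.07):
    years = 0
    while current_rate < 1.0:
        current_rate *= 1 + acceleration
        years += 1
    return years

print(2007 + years_to_aev())  # crossover year under these assumptions
```

Under that assumption the crossover lands in the early 2030s, somewhat later than the Kurzweil/de Grey forecast; a slower acceleration pushes the date out rapidly, which is part of why I treat their timeline with caution.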

I remain unconvinced that such improvements will be reached as soon as Ray Kurzweil and Aubrey de Grey predict.  I will be convinced after we clearly achieve 50% of AEV in developed countries, where six months are added to life expectancy every year.  It is possible that the interval between 50% and 100% of AEV comprises less than a decade, but I'll re-evaluate my assumptions when 50% is achieved. 

Serious research efforts are underway.  The Methuselah Mouse Prize will award a large grant to researchers that can demonstrate substantial increases in the lifespan of a mouse (more from The Economist).  Once credible gains can be demonstrated, funding for the research will increase by orders of magnitude. 

The enormous market demand for lifespan extension technologies is not in dispute.  There are currently 95,000 individuals in the world with a net worth greater than $30 million, including 1125 billionaires.  Accelerating Economic Growth is already growing the ranks of the ultrawealthy at a scorching pace.  If only some percentage of these individuals are willing to pay a large portion of their wealth in order to receive a decade or two more of healthy life, particularly since money can be earned back in the new lease on life, then such treatment already has a market opportunity in the hundreds of billions of dollars.  The reduction in the economic costs of disease, funerals, etc. is an added bonus.  Market demand, however, cannot always supersede the will of nature. 

This is only the second article on life extension that I have written on The Futurist, out of 154 total articles written to date.  While I certainly think aging will be slowed down to the extent that many of us will surpass the century mark, it will take much more for me to join the ranks of those who believe aging can be truly reversed.  To track progress in this field, keep one eye on the rate of decline in cancer and heart disease deaths, and another eye on the Methuselah Mouse Prize.  That such metrics are even advancing on a yearly basis is already remarkable, but monitoring anything more than these two measures, at this time, would be premature. 

So let's find out what the group prediction is, with a poll.  Keep in mind that most people are biased towards believing this date will fall within their own lifetimes (poll closed 7/1/2012).


March 25, 2008 in Accelerating Change, Biotechnology, Economics, The Singularity | Permalink | Comments (16) | TrackBack (0)


Batteries Set to Advance, Finally

The Economist has a great article on the history and near-future outlook for battery technology. Batteries have scarcely improved in the last century, and there have been too many false starts for a seasoned observer to get his hopes up too easily.  But this chart of battery capacity by unit weight, in particular, is something I have been seeking for a long time.  It vindicates my belief that lithium-ion technology is improving at a rate far faster than traditional nickel batteries (which have scarcely improved at all in the last half-century).  Note, importantly, that if we join the multiple curves, we see a strong indication of the classic accelerating-technology exponential curve.  This time we know it's for real. 
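An exponential capacity curve implies a fixed doubling time.  Since the article does not quote a growth percentage, the rates below are purely illustrative, to show how sensitive the doubling time is to the annual gain :

```python
# Doubling time of battery energy density for a few assumed annual
# improvement rates.  The rates are illustrative -- the chart suggests
# exponential growth but does not quote a specific percentage.
import math

def doubling_time(annual_gain):
    """Years for capacity to double at a constant fractional yearly gain."""
    return math.log(2) / math.log(1 + annual_gain)

for gain in (0.05, 0.08, 0.10):
    print(f"{gain:.0%} per year -> doubles in {doubling_time(gain):.1f} years")
```

Even a modest 8% annual gain doubles capacity in about nine years, which is roughly the horizon over which the applications below become plausible.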

This is exciting on multiple levels, because it opens the door not just to mainstream electric vehicles in the next decade, but to a variety of wearable electronic devices, 20-30 hour laptop batteries, household robotics, and other applications that have not yet been imagined. 

Future projections are usually over-optimistic, you say?  Let's also not forget Stanford University's nanowire research to increase lithium-ion battery capacity, which was widely acclaimed as among the most important scientific breakthroughs of 2007. 

Related :

A Future Timeline for Energy

A Future Timeline for Automobiles

Why I Want Oil to Hit $120 per Barrel

March 11, 2008 in Accelerating Change, Energy, Nanotechnology, Technology | Permalink | Comments (10) | TrackBack (0)


Is Technology Diffusion in a Lull?

There are minor but growing elements of evidence that the rate of technological change has moderated in this decade.  Whether this is a temporary trough that merely precedes a return to the trendline, or whether the trendline itself was greatly overestimated, will not be decisively known for some years.  In this article, I will attempt to examine some datapoints to determine whether we are at, or behind, where we would expect to be in 2008. 

There is overwhelming evidence that many seemingly unrelated technologies are progressing at an accelerating rate.  However, the exact magnitude of the accelerating gradient - the second derivative - is difficult to measure with precision.  Furthermore, there are periods where advancement can be significantly above or below any such trendline. 

This brings us to the chart below from Ray Kurzweil (from Wikipedia) :

[Chart : mass use of inventions over time (log scale), from Ray Kurzweil]

This chart appears prominently in many of Kurzweil's writings, and brilliantly conveys the concept of how each major consumer technology reached the mainstream (as defined by a 25% US household penetration rate) in successively shorter times.  The horizontal axis represents the year in which the technology was invented. 

This chart was produced some years ago, and therein lies the problem.  If we were to update the chart to the present day, which technology would be the next addition after 'The Web'? 

Many technologies can claim to be the ones to occupy the next position on the chart.  iPods and other portable MP3 players, various Web 2.0 applications like social networking, and flat-panel TVs all reached the 25% level of mainstream adoption in under 6 years, in accordance with an extrapolation of the chart through 2008.  However, it is debatable whether any of these are 'revolutionary' technologies like the ones on the chart, rather than mere increments above incumbent predecessors.  The iPod merely improved upon the capacity and flexibility of the Walkman, the plasma TV merely consumed less space than the tube TV, etc.  The technologies on the chart are all infrastructures of some sort, and it is clear that after 'The Web', we are challenged to find a suitable candidate for the next entry. 

Thus, we either are on the brink of some overdue technology emerging to reach 25% penetration of US households in 6 years or less, or the rapid diffusion of the Internet truly was a historical anomaly, and for the period from 2001 to 2008 we were merely correcting back to a trendline of much slower diffusion (where it takes 10-15 years for a technology to reach 25% penetration in the US).  One of the two has to be true, at least for an affluent society like the US.

This brings us to the third and final dimension of possibility.  This being the decade of globalization, with globalization itself being an expected natural progression of technological change, perhaps a US-centric chart itself was inappropriate to begin with.  Landline telephones and television sets still do not have 25% penetration in countries like India, but mobile phones jumped from zero to 10% penetration in under 7 years.  The oft-cited 'leapfrogging' of technologies that developing nations can benefit from is a crucial piece of technological diffusion, which would thus show a much smaller interval between 'telephones' and 'mobile phones' than in the US-based chart above.  Perhaps '10% Worldwide Household Penetration' is a more suitable measure than '25% US Household Penetration', which would then possibly show that there is no lull in worldwide technological adoption at all. 

I may try to put together this new worldwide chart.  The horizontal axis would not change, but the placement of datapoints along the vertical axis would.  Perhaps Kurzweil merely has to break out of US-centricity in order to strengthen his case and rebut most of his critics. 

The future will disclose the results to us soon enough.

(crossposted on TechSector)

Related :

Are You Acceleration Aware?

The Impact of Computing

These are the Best of Times

February 19, 2008 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (28) | TrackBack (0)



© The Futurist