The Property Valuation Reckoning is Imminent

Over the last few years we’ve seen technology ride into the real estate industry much like Clint Eastwood riding into town in one of his Westerns. With whips cracking, eerie shrieks, and the slow plucking of a Fender electric guitar playing in the background, he’s ready to take on whatever land baron or mining tycoon is oppressing the town. As he sits atop his horse, cowboy hat slightly cocked to the side, hand-rolled cigar hanging from the corner of his mouth, surveying the townspeople with suspicious, squinted eyes, everyone fears, or at least notices, his arrival. In every one of those movies, havoc was wreaked on the establishment (meaning he killed them all) and a new way of life was ushered in for the people. But unlike the Pale Rider, technology has not had the same disruptive effect on the real estate industry.

Though there are thousands of PropTech “startups,” and it seems every large player in the industry has established some sort of venture fund or innovation group out of fear of losing business or becoming obsolete, real estate hasn’t been significantly impacted by technological disruption. Despite the numerous claims of “disruption” and “transformation,” technology has, with a few exceptions, provided little more than incremental improvements. It has, however, forced the industry to question everything it does. It’s put a focus on efficiency, on new capabilities, and on rethinking how we can develop strategies around technology to modernize the real estate industry and take advantage of the enormous financial and social opportunities it presents.

Placing Value On Our Valuation Methods

At its core, real estate is about value, whether that’s value derived from the price of a property, from how a space maximizes a firm’s productivity and potential, or from what it adds to the attractiveness of a city or neighborhood. It’s about maximizing revenue, designing spaces people want to be in and will pay to be in, increasing efficiencies, lowering costs, and so on. Capturing these factors (and many others) is essential to understanding the true value of a property.

But real estate is complex, a mixture of quantitative, qualitative, and behavioral factors that are difficult to capture and make sense of. Data is often slow to disseminate, and much of it is vague or incomplete, with numerous details left out. Every day, analysts and associates at real estate firms build thousands of models for properties, mostly in Excel. They check all their formulas, size the margins properly, make sure the colors are pretty-ish, and tweak the inputs here and there to get to that final number: the value of the property.

But those models are only as good as the data we put into them. There’s a difference between data and good data; between incomplete data and detailed, consistent data. We can run models on bad or limited data and still get answers, but that doesn’t mean they’re good answers, and it doesn’t improve our ability to make better decisions. It certainly doesn’t improve our ability to automate the valuation process with any confidence that the output will be accurate, accounting for all the nuances of leases, physical condition, and economic or capital markets factors.

We often account for these factors mentally, reviewing numerous websites for information, walking a property to get an idea of its physical condition, poring over the details of rent rolls, financials, and lease contracts. We then decide in our heads how the cap rate or interest rate should be adjusted, what kind of reserves we want to set aside, and, ultimately, what price we’re willing to pay or sell a property for. For all of these decisions, we simply don’t have the right kind, or amount, of data needed to build automated systems that replicate our current valuation process.

Although we don’t have the data and tools to create anything close to full automation, that doesn’t mean we can’t improve our approach. While there are challenges to developing better valuation systems, and the mythical “Fully Automated Valuation Model” doesn’t exist for real estate, there is a big gap between our current methods of automation and where we could be based on existing technology alone, let alone on the systems and data collection methods we can start developing now to vastly improve our capabilities in the future.

Below, I’d like to explore three main areas that I believe need to be addressed in order to automate more of the valuation process and to make that process more holistically accurate. While entire papers or books could be written about each one of these, I’ll touch on some high-level aspects that can be used to get conversations started.

  1. Increasing the availability of detailed, property-specific information, including both operational (ongoing) activities and transactions (sale, refinance, etc.).
  2. The collection and analysis of macroeconomic, microeconomic, and capital markets influences that affect the real estate industry.
  3. Using concepts such as systems thinking, systems engineering, and advanced technology such as artificial intelligence, machine learning, and deep learning to design semi-automated models that capture and make sense of both property-specific and larger capital markets factors.

Before I discuss those three areas, however, I’d like to briefly touch on the importance of good data and on the limits of technology, to put the significance of these areas in context.

The importance of good data

Good data is essential for making the right decisions, something that has been well known in the military for millennia. The right intelligence can make the difference between winning and losing a battle or a war. Governments spend inordinate amounts of money on human and technological intelligence capabilities to make sure they have every detail of information they possibly can, so they can make better decisions.

A great example of this is the difference in intelligence between the Israelis and the Egyptians during the 1967 Six-Day War. Israel was outnumbered nearly two to one in manpower, two to one in tanks, seven to one in artillery, three to one in aircraft, and four to one in warships. What they did have was an extremely intricate intelligence gathering network. In the years leading up to the war, Israeli intelligence was able to pinpoint the location of every aircraft and the name of every pilot in the Egyptian Air Force. They knew the Egyptian base commanders, their schedules, the capabilities of their communications networks, even the menus Egyptian soldiers were being served.

After years of building tensions, the war finally broke out on the morning of June 5th with Israel making a pre-emptive strike against Egypt and several other nations. Within the first few hours of the war, nearly three-quarters of the Egyptian Air Force was destroyed, all of Egypt’s airfields were heavily damaged, and much of Egypt’s other fighting equipment was destroyed as well. The Israelis effectively countered a force twice their size with an amazingly detailed set of information.

While valuing a property doesn’t (hopefully) carry the massive geopolitical consequences of war, the example highlights a commonality: better information and better data will usually lead to more effective decisions, regardless of the area of application.

The real estate industry has not traditionally been strong at collecting data in formats that allow it to be stored and combined for analysis. Large firms, which see more activity and more transactions, are often segmented by group: asset management, lending, appraisal, leasing, etc. While an enormous amount of data is generated on a daily basis, it’s not being captured and consolidated in a way that could lead to better insights. This represents an opportunity to make two shifts in how data is handled that could be enormously beneficial to firms and to the larger industry.

First, methods need to be developed that capture and store data on individual transactions in a more systematic way. This would allow the details of each transaction, which now sit in thousands of individual spreadsheets spread across different divisions of firms, to be captured more formally.

Second, barriers to sharing data across different groups must be reduced. While much of the data is the same, each group serves a slightly different function, and each often generates peripheral data that could be useful either to other groups in the firm or for much deeper analysis to identify trends or opportunities.
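To make the first of these shifts concrete, here is a minimal sketch of what a structured transaction record might look like, written in Python. Every field name and type here is an illustrative assumption, not an industry standard; the point is simply that a shared, queryable schema can replace one-off spreadsheets.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TransactionRecord:
    """Hypothetical schema for capturing one transaction in a queryable,
    consolidated form instead of an analyst's local workbook."""
    property_id: str                  # shared key so different groups can join their data
    transaction_type: str             # "sale", "refinance", "lease", etc.
    close_date: date
    amount: float                     # sale price or loan amount
    noi: Optional[float] = None       # trailing NOI at close, if known
    cap_rate: Optional[float] = None  # implied cap rate, if known
    source_group: str = ""            # asset management, lending, appraisal, leasing
    notes: str = ""                   # qualitative detail, captured rather than lost

# A record any division can write to a shared table for later analysis.
deal = TransactionRecord(
    property_id="BLDG-001",
    transaction_type="sale",
    close_date=date(2018, 6, 1),
    amount=20_000_000,
    noi=1_000_000,
    cap_rate=0.05,
    source_group="asset management",
)
```

Once records like this accumulate in one place, the second shift, sharing across groups, becomes a permissions question rather than a data archaeology project.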

The limits of technology

I don’t want the focus of this article to be the technology. My feeling is that there is way, way too much emphasis on the technology itself and way too little on applying technology as a tool to solve the more fundamental challenges of the real estate industry. Technology applied to surface-level activities will have surface-level impact, but understanding the deeper layers of how we make decisions, and what’s needed to make better decisions, will allow us to apply the tool (technology) to the problem (real estate) in the right way and gain much more insight.

Unfortunately, however, much of what I hear about PropTech is about the “APIs” and the “artificial intelligence” (AI) or “machine learning” (ML) or “deep learning.” The fact is, we simply don’t have massive data sets of detailed property, lease, and transactional information to apply AI/ML/deep learning algorithms to. And until we do, there will remain a limit to the level of automation we can achieve. There will have to be human intervention to fill in the blanks and refine the output where data is missing.

The importance of small details

Most of the models we build boil value down to a single number, an NOI or an IRR. We account for the tiny nuances of a lease only qualitatively, and only sometimes at that, even though those nuances can dramatically affect the certainty of the revenue from each lease. The details of leases for commercial and multifamily tenants can make a dramatic difference in the lease rate and in the value of the property.

Let’s use a hypothetical office building to illustrate. Suppose the building just signed two new leases. For argument’s sake, say the leases were signed on the exact same day, for the exact same square footage, and for the exact same 10-year term. Tenant A is paying $30 per square foot and Tenant B is paying $40. Why such a big difference? Market conditions are the same for both leases, and the tenants are taking the same amount of space for the same term, so it doesn’t seem there should be so much difference in their rates. But there could be lots of reasons for the difference, and hundreds of different combinations of those reasons.

For example, perhaps Tenant B required $50 PSF in tenant improvements (TI’s) while Tenant A did the build-out of its space at its own cost. Maybe Tenant B asked for an early termination clause after year 5 of the lease with no penalty, while Tenant A has no early termination clause. Maybe Tenant A is a corporation with a single-A credit rating and a corporate guarantee on the lease, while Tenant B is not offering a corporate guarantee. What they do with the space may also differ: Tenant A could be an insurance office that agreed to have air and heat turned off from 7pm to 6am, while Tenant B is a high-frequency trading firm that needs its server room cooled 24 hours a day and incurs much greater electricity costs. These are just a few of the dozens of reasons the lease rates could differ, and any of them could appear in any number of combinations that affect the certainty (risk), and thus the value, of the lease’s cash flows.
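To see how just one of these terms moves the economics, consider amortizing the landlord-funded concessions over the term to compare effective rents. This is a minimal sketch using the numbers from the hypothetical above; the straight-line, undiscounted treatment is a simplifying assumption, and real underwriting conventions vary.

```python
def effective_rent(face_rent_psf: float, term_years: int,
                   ti_psf: float = 0.0, free_rent_months: int = 0) -> float:
    """Straight-line effective rent per square foot per year: total rent
    collected over the term, net of landlord concessions, spread evenly.
    Simplified on purpose (no discounting)."""
    total_rent = face_rent_psf * (term_years - free_rent_months / 12)
    return (total_rent - ti_psf) / term_years

# Tenant A: $30 face rent, did its own build-out at its own cost
tenant_a = effective_rent(face_rent_psf=30, term_years=10)
# Tenant B: $40 face rent, but the landlord funded $50 PSF of TI's
tenant_b = effective_rent(face_rent_psf=40, term_years=10, ti_psf=50)

print(f"Tenant A: ${tenant_a:.2f} PSF effective")  # $30.00
print(f"Tenant B: ${tenant_b:.2f} PSF effective")  # $35.00
```

On this convention, half of the $10 spread disappears immediately; the termination option, the missing guarantee, and the electricity load would each have to be priced on top of that.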

The problem is that these details, beyond the tenant, square footage, lease rate, and sometimes the term, are rarely reported, and when they are, it’s for an extremely small number of existing leases. That makes it unrealistic for any sort of algorithm, even AI or ML, to make sense of whether leases in the building should be valued at $30 or $40 per square foot, and it significantly limits our ability to develop automated valuation systems.

Creating data warehouses

There are two challenges to developing better data sets in real estate. First, information is scattered and unstructured. Purchase contracts and leases for commercial properties are typically drafted individually by attorneys, which means the terms, the structure, and where those terms appear differ from one contract to the next. This makes them difficult to analyze with Natural Language Processing (NLP) or other text-scraping techniques. Excel models built by analysts are also bespoke, making it difficult to write software that can accurately crawl historical models to extract information.

Second, many of the judgments behind the pricing and risk of leases and properties are made mentally and are never captured when running a model. We look at economic factors, tenant credit, borrower strength, etc., but we usually don’t record this information, much less quantify it in our models in any way. We say the “borrower doesn’t have a strong balance sheet” or that the tenants are “largely mom and pop.” We tweak the mortgage rate we think the lender will offer and put a premium on the cap rate to account for these risks. But nowhere in this process is that information stored for later analysis, nor does it teach us much about how large a premium we added to the cap rate to account for the additional risk. Was it 0.25% or 0.5%? That difference alone would likely make a meaningful difference in the value of the property.
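One low-tech remedy would be to log each of those mental adjustments alongside the model so the premiums can be analyzed later. The structure below is a hypothetical sketch, not an existing product; the factor names and basis-point figures are just illustrations.

```python
from dataclasses import dataclass

@dataclass
class RiskAdjustment:
    """Hypothetical record of one subjective underwriting adjustment, so the
    reasoning behind a cap rate premium is stored instead of lost."""
    factor: str       # e.g., "borrower doesn't have a strong balance sheet"
    evidence: str     # what supported the judgment
    premium_bps: int  # cap rate premium actually applied, in basis points

adjustments = [
    RiskAdjustment("weak borrower balance sheet", "reviewed personal financials", 25),
    RiskAdjustment("largely mom-and-pop tenants", "rent roll review", 25),
]

base_cap_rate = 0.0550
adjusted_cap_rate = base_cap_rate + sum(a.premium_bps for a in adjustments) / 10_000
print(f"Adjusted cap rate: {adjusted_cap_rate:.2%}")  # 6.00%
```

With a few hundred of these records, a firm could finally ask whether its 25 or 50 basis point judgments have been consistent, and whether they were warranted.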

The problem with collecting this information lies less with technological capability and more with the willingness of individuals and firms to share these details publicly. Data is a source of competitive advantage, especially for large firms that see a significantly higher number of transactions than smaller firms. Protecting this data prevents larger databases from being compiled, in turn keeping the market opaque rather than transparent.

In the section on the importance of good data above, I touched on methods for firms to improve how they capture and use data, but the insights from their internal data will nevertheless be limited compared to a world where data is shared across firms for a much more complete view of the market.

Real estate has become more correlated with larger capital markets

Until twenty or thirty years ago, real estate value was driven largely by factors at the local or regional level. Most loans were made by local banks that knew the area well, and valuations were driven by local or regional economic factors. That has all changed. Real estate has penetrated the larger capital markets and is now much more affected by national and global economic trends. The advent of CMBS, the growth of REITs, and the inclusion of real estate as one of the major asset classes have put real estate and real estate-related securities in the same ballpark as traditional fixed-income and equity investments.

This increased exposure hasn’t removed local or regional risk to properties, however. It has simply added another layer of risk that wasn’t as prevalent in the past. We still need to know local market conditions such as vacancy rates, leasing trends, cap rates, employment, and other local economic factors. We still need to know the specifics of the tenants in the buildings. But we also need to get better at understanding how larger macro forces affect the availability and cost of capital, which has begun to have major effects on the performance of real estate as an asset class.

“Interest rates have always played a role in commercial real estate valuations through their impact on cap rates,” says Peter Muoio, PhD, Chief Economist at Ten-X Commercial and former head of Deutsche Bank Real Estate Research. “However, research we have done over the years suggests two things. One, that on average, interest rates only explain about half the movement in cap rates and two, the magnitude of the impact shifts depending on the capital market and economic situation. For example, over recent years we have been fairly copacetic about the potential impact of rising interest rates in CRE valuations because cap spreads were historically very wide, leaving leeway for CRE specific variables to outweigh pure interest rate impacts. However, today, spreads are very narrow by historical standards, leaving cap rates and CRE valuations more vulnerable to pressure from broad interest rate moves.”

While we, and in turn our models, do a decent job of capturing these local factors, the models haven’t changed much to account for the additional layers of risk presented by larger economic forces. And even if they did, how would we incorporate larger macro and capital markets movements into analyzing a potential acquisition or loan? Even if we figured out how to incorporate them into our models, how would we make sense of the enormous amounts of data on economic and geopolitical events that could alter the course of the global economy, in a way that lets us understand the effect on the real estate market, especially on a specific piece of real estate?

Excel models are great at calculating cash flows growing at a constant 3%, or a sale price based on what cap rates may be ten years from now, but those inputs are typically guesses we use to fill in blanks. Understanding larger economic patterns, and how those patterns may affect the performance of real estate, is crucial to developing more accurate long-term forecasts.
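For reference, this is essentially the calculation those spreadsheet models perform. The sketch below treats the 3% growth rate and the exit cap rate as inputs precisely because they are the guesses in question; all of the numbers are illustrative assumptions.

```python
def dcf_value(noi_year1: float, growth: float, exit_cap: float,
              discount_rate: float, hold_years: int = 10) -> float:
    """Present value of constant-growth cash flows plus a reversion (sale)
    value at an assumed exit cap rate -- the standard spreadsheet approach.
    The growth rate and exit cap are guesses, which is the point."""
    pv = 0.0
    noi = noi_year1
    for year in range(1, hold_years + 1):
        pv += noi / (1 + discount_rate) ** year
        noi *= 1 + growth
    # Reversion: capitalize the year-after-sale NOI at the assumed exit cap.
    sale_price = noi / exit_cap
    pv += sale_price / (1 + discount_rate) ** hold_years
    return pv

# Illustrative inputs only: $1M NOI, 3% growth, 6% exit cap, 7% discount rate
print(f"${dcf_value(1_000_000, 0.03, 0.06, 0.07):,.0f}")
```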

Flow of funds affects prices

I have a thesis that prices have been affected much more by the amount of funds flowing into the real estate industry than by changes in the fundamental performance of properties. Take the 10-year treasury rate and the weighted average cap rate for all CMBS loans (retail, multifamily, hospitality, and office) from January 1996 through December 2018. From January 1996 through December 2008, the average cap rate premium was roughly 61%, meaning that if the 10-year treasury rate was 5.00%, the weighted average cap rate was 8.05%. From January 2009 through December 2018, however, the average premium was roughly 170%.
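Restated as a formula, the premium is proportional: the implied cap rate is the treasury rate times one plus the premium. The 61% and 170% figures come from the paragraph above; the 2.50% treasury rate in the second example is just an illustrative input.

```python
def implied_cap_rate(treasury_rate: float, premium: float) -> float:
    """Cap rate implied by a proportional premium over the 10-year treasury:
    cap = treasury * (1 + premium)."""
    return treasury_rate * (1 + premium)

# Jan 1996 - Dec 2008: average premium of roughly 61%
print(f"{implied_cap_rate(0.0500, 0.61):.2%}")  # 8.05%

# Jan 2009 - Dec 2018: average premium of roughly 170%
# (the 2.50% treasury rate here is an assumed example)
print(f"{implied_cap_rate(0.0250, 1.70):.2%}")  # 6.75%
```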

I believe this premium has led many fixed-income investors to put more money into the real estate sector, chasing higher returns than the two to three percent offered by treasuries (and the even lower rates offered by other countries) over the past seven years. With more funding available, and at lower rates than historically available, real estate has been an attractive asset class.

Tenant exposure to economic movements

Every tenant is part of an industry, whether that’s finance, law, technology, marketing, etc. Fundamental changes in the economy, or in an industry’s place within the larger global, national, or local economy, could have a significant impact on how tenants within that industry perform over time. We sometimes account for this mentally, but we have no way of performing deeper analyses of how a tenant or industry will be affected by larger economic movements.

The automation of the real estate valuation system

I’ve become a big fan of a concept called “systems thinking” over the last year or so. Entire textbooks have been written on the nuances and applications of systems thinking to real world problems, so please bear in mind that the following is an extremely high-level description of systems thinking and its potential for the real estate industry.

Systems thinking is a method of breaking down a whole system into its subsystems and then understanding the complexity of the dynamic relationships among the different subsystems. To put it another way and apply it to the topic of valuation, systems thinking would entail breaking down the valuation process into the parts that lead to the valuation, such as financial statements, rent rolls, market leasing and sales analysis, economic and demographic information, etc. Then each of those parts, or subsystems, would be further broken down.
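As a rough illustration of that decomposition, the valuation “system” can be written out as a tree of subsystems and the data each one depends on. The breakdown below is deliberately partial and the groupings are my own; a real exercise would go several layers deeper.

```python
# Illustrative, partial decomposition of the valuation system into
# subsystems and the data each one needs; a real systems-thinking
# exercise would break each of these down further.
valuation_system = {
    "financial_statements": ["historical NOI", "operating expenses", "reserves"],
    "rent_roll": ["lease terms", "TI's and concessions", "termination options",
                  "tenant credit"],
    "market_analysis": ["comparable leases", "comparable sales", "vacancy rates"],
    "economic_data": ["local employment", "interest rates", "capital flows"],
}

for subsystem, inputs in valuation_system.items():
    print(f"{subsystem}: {len(inputs)} inputs -> {', '.join(inputs)}")
```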

Understanding the system, each of the subsystems, and the dynamics of the relationships among the subsystems allows you to identify what data is needed for each of those subsystems to work. What has become apparent is that many of the inputs we use for valuation models are still either qualitative or subjective. Qualitative in the sense that we have to convert factors such as the relative condition of the building, the view, access to transportation, etc. into numbers; does a better view for one apartment over another command a $100 or a $400 per month premium? The answer is: it depends. Subjective in the sense that we often have numerous factors to consider when deciding on an appropriate lease rate, cap rate, or discount rate. We make those adjustments mentally, but there’s really no formulaic way of adjusting for all the possible combinations: based on all the property-specific, market, and economic factors, should you value a property using a 5.0% or a 5.5% cap rate? It doesn’t seem like much of a difference, but on a $1,000,000 NOI, that 0.5% difference in cap rate is a roughly $1,800,000 difference in price.
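That cap rate sensitivity is worth verifying directly, since direct capitalization is just NOI divided by the cap rate:

```python
noi = 1_000_000
value_at_5_0 = noi / 0.050   # $20,000,000
value_at_5_5 = noi / 0.055   # ~$18,181,818

# A half-point difference in the cap rate moves the price by roughly 9%.
print(f"Difference: ${value_at_5_0 - value_at_5_5:,.0f}")  # $1,818,182
```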

Once the system has been sufficiently broken down and the relationships among the variables well understood, technology can be applied to automating and analyzing each of the subsystems. A better understanding of the system can also allow firms to figure out how to better collect information and include fields that capture some of the more qualitative aspects of how we value properties.

Automating the process

With a proper system designed, data identified, and methods in place to collect much more consistent and detailed data, technology can then be applied to better understand the enormous complexity of how the real estate market functions. Ultimately, rents, vacancy, interest rates on loans, cap rates, and required returns on properties are driven either by larger economic forces or by a combination of property-specific and larger economic factors. Our models, however, do a poor job of combining larger economic factors with our property-specific models, and it’s impossible for the human mind to consider and make sense of all the different drivers of properties and economies.

According to Timothy Savage, PhD, Professor of Real Estate at New York University’s Schack Institute of Real Estate and former Senior Managing Director of CBRE’s Econometric Advisors, “Capital markets are global, but real estate remains local. In gateway markets like New York and San Francisco, valuations will be increasingly driven by local economic conditions as the economics of agglomeration dominate macroeconomics. Proximity to the MTA will matter more than a 25 bps increase in the Fed Funds rate. Outside of gateway markets, automated valuations will be more common using advanced data analytics. In the near term, they will not replace the traditional broker’s opinion of value (BOV), but will provide benchmarks for BOVs.”

“At Ten-X Commercial,” says Muoio, “we see bidding activity throughout auctions for properties, so we get real-time information on not only the final high bid price but the number of bidders at different prices, the intensity of bidding, second-highest last bid, etc. This is all very valuable price information that can be brought to bear on valuation models beyond simply reported closed sale prices. From a theoretical perspective, one could also imagine that who the bidders are – international, opportunistic, owner/user, etc. – could factor into a property’s valuation since different bidders come with different risk preferences and time horizons. Since we are increasingly able to parse out the buyer universe and how it is shifting over time, we could be able to make valuation models more robust.”

Understanding these differences is exactly what technology can help us do well. It can help us identify and incorporate layers of data and information into the decision-making process. This would give us a better idea of what to consider and how those different factors could affect the performance of the investment. If we can effectively develop an understanding of how macroeconomic and capital markets issues affect different markets and different types of properties, our forecasts for future performance will likely change our perception of which markets and which properties would make the best long-term investments.

Valuation fallacies

While I think we can make tremendous improvements in our valuation methods using currently available systems and technologies, I’d like to address two ideas that I think are unrealistic. The first is the idea that publicly available data is sufficient to develop fully automated valuation systems. The second is the idea that full automation of real estate valuation is achievable anytime soon.

There are many companies focused on collecting vast sources of publicly available information into a centralized storehouse where the data can be combined and analyzed. Some firms are using AI and ML to analyze disparate parts of urban and suburban areas to make better sense of how values are derived for both residential and commercial real estate.

My opinion, however, is that while combining publicly available data and analyzing satellite images and coffee shops will help us understand areas better, these approaches will fall far short of the coveted “automated” claim when it comes to accurate valuations, which leads to the next point on full automation.

As for full automation, there are simply too many nuances in the valuation process that we don’t have high-quality data on. Any valuation model will need to give the user the ability to add or modify inputs to refine valuations based on the more qualitative factors that must be considered.

Much of this article has touched only superficially on the concepts that need to be addressed in order to fully integrate technology into the real estate industry. Real estate can be hard and complex. Technology can be hard and complex. Integrating technology into the real estate industry (or any industry, for that matter) is exponentially more complex. Despite many of the claims being made, achieving a high level of efficacy through technological transformation will not be easy, fast, or cheap. It hasn’t been in any other industry and it won’t be in real estate. Those who buy into the easy, fast, cheap falsehood will likely spend quite a bit of money and time without achieving real results.

The end — the best place to start

How many of us truly understand all the different factors that will affect how we ultimately derive value from a property or investment at the end of the holding period? We’re intimately familiar with the numbers that we put into the model, but do we really understand what factors drive those numbers? And do we truly understand what drives the factors that drive those factors? We can go down many more layers into this process, but I think you get the point.

We don’t often find a hammer in the garage and ask ourselves what we can build with it. Rather, we decide that we want to build something and then go into the garage to figure out what tools we’ll need to accomplish the task. Just like the hammer, technology by itself won’t make the real estate industry much better. Technology offers many tools with enormous potential. And just as Eastwood used his Colt revolvers to do away with the old and bring in the new, only by developing a better structural model of how the industry works, and of what we want to achieve, will technology be able to ride into town, guns blazing, and establish a new order. [Propmodo]