Monte Carlo Simulation in Excel for Mac 2011

Monte Carlo simulation isn’t something that Excel for Mac does natively, but as I’ll demonstrate in this post, it’s certainly doable. Using a series of VBA modules, we’ll set everything up once, and you can then reuse the resulting workbook as a template for future simulation projects.

Assume you’re the managing partner of a law firm, and you’d like to know whether it would be profitable to bring in another first-year associate. Experience has taught you that evaluating a new attorney is ordinarily a two-year process, so your window of analysis will be two years.

You already know what the market will bear in terms of billing rates for first and second-year associates, so your decision is likely to turn on the difference between expected revenue and estimated overhead in relation to a first-year associate. Overhead is largely a fixed cost, but expected revenue is inherently variable. As it happens, expected revenue is primarily a function of the number of hours billed.

By considering a headcount increase in the first place, you’ve established that you currently have more work than your present staff can accomplish working full-time. What’s uncertain is whether the additional work would amount to something like 4000 billable hours over the next two years. Consequently, we’ll simulate various totals of billable hours to study how they affect profitability and return on investment.

Setup in Excel

First, let’s identify and assign a value to each of the variables we need to consider. You know from experience that you’ll have to write off about 35 percent of the hours billed by a first- or second-year associate, so let’s identify a variable called Collectible Billings (cell B2), and assign a value of 0.65.

You expect to bill the attorney’s time at $225 per hour for the first year and $245 an hour for the second year; we’ll average the two and assign $235 as the expected Billing Rate (cell B3).

We’ll need to include the applicable Risk-Free Rate (RFR, cell B4), which is typically the quoted rate on 90-day T-Bills. This is a basic measure of opportunity cost for any type of firm project, including a headcount increase; it also represents the estimated effect of inflation (if any). Since the RFR is unusually low at this writing (0.1%), we’ll arbitrarily assign a more conservative rate of 2%. Note that if your firm already has an established hurdle rate for evaluating new projects (e.g., eight percent), you should use the hurdle rate here instead of the risk-free rate.

If you ultimately decide to defer hiring a new attorney for now, you estimate that hiring one later would take about 60 days. The economic value of avoiding this 60-day lag is called “Convenience Yield,” and we quantify it as 60/720 (60 days over a two-year, 720-day window), or approximately 0.08; the corresponding multiple in cell B5 is therefore about 1.08. This parameter is optional, and if you feel that including it would make your simulations too rosy, you can simply enter 1 in cell B5.

Our time increment (cell B6) is two years, so we enter 2.

Next, we’ll assume that the firm requires associates to bill 2000 hours per year. The value in cell B7 is simply your annual billable-hour quota times 2 years, or 4000. We’ll end up making this value dynamic, but we’ll use the firm quota as the basis for certain other estimates, as well as for our starting estimate of the number of hours billed.

Finally, we’ll estimate (reasonably, if arbitrarily) that Overhead (i.e., salary, benefits, administrative support, equipment, etc.) for an additional associate will amount to sixty percent of expected receivables, or Billing Rate × Expected Demand × 0.6 (cell B8).

Once you enter these variables, you should have a column of values that looks like this:

[Figure: starting values in Excel]
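In tabular form, the inputs described above are (with the convenience-yield multiple entered as 1.08; your exact layout may differ):

     A                        B
2    Collectible Billings     0.65
3    Billing Rate             $235
4    Risk-Free Rate           0.02
5    Convenience Yield        1.08
6    Time Increment (years)   2
7    Expected Demand (hours)  4000
8    Overhead                 $564,000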

To keep things simple, we’ll hold all other variables constant and express our expected profit from hiring the associate purely as a function of hours billed (actual demand). Because we also expressed cost as a function of expected billable hours, this is not a trivializing assumption.

Monte Carlo Simulation with Excel and VBA

For purposes of simulation, we’ll model the number of hours billed with a lognormal distribution. Simulation also requires a source of randomness, so we’ll set that up next.

As I’ve noted in other posts, Excel’s random number generator is notoriously suspect, so we’ll make use of the excellent Mersenne Twister module available here. For my own convenience, I’ve made some minor changes to the original code to facilitate random seeding, and you can download my edited Mersenne Twister module here. N.B.: Excel for Windows users can and should make use of the free random-number-generating add-in available here.

In what follows, I’ll assume that you’re using my modified Mersenne Twister module. Once you’ve copied the Mersenne Twister code, go into Excel and click on the Developer tab (click to enlarge).

[Figure: Developer tab visible]

(Here are instructions on how to make the Developer tab visible if it isn’t already.) On the top menu bar, click Insert, then select Module. When the new blank module comes up, paste in the Mersenne Twister code you copied. Next, for ease of reference, I recommend that you go to View and select the Properties window, where you can specify a unique name for the module.

We’ll also need to specify a random seed function. For this, my Mersenne Twister module uses the same VBA function that I provided in an earlier post; I’ve included it separately in a second module.

Finally, we’ll need VBA code to generate lognormally distributed random values. Here, we’ll use the random number generation code that I also set out in an earlier post; I’ve included it separately in a third module.
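If you’re curious what such a function looks like under the hood, here’s a minimal sketch of one way to write it (my illustration only; the actual module may differ). It uses the Box-Muller transform, and it assumes your Mersenne Twister module exposes a function, here called genrand(), that returns a Uniform(0,1) value; adjust that name to whatever your module actually provides:

Public Function dynrngLN(mu As Double, sigma As Double) As Double
' Lognormal variate via Box-Muller: Exp(mu + sigma*Z), Z ~ N(0,1).
' Assumes genrand() returns a value strictly between 0 and 1.
    Dim u1 As Double, u2 As Double, z As Double
    Const PI As Double = 3.14159265358979
    u1 = genrand()
    u2 = genrand()
    z = Sqr(-2 * Log(u1)) * Cos(2 * PI * u2)
    dynrngLN = Exp(mu + sigma * z)
End Function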

If we simulate values from a lognormal distribution with mean = 0.05 and standard deviation = 0.1 (the parameters of the underlying normal), we obtain a range of values with a histogram that looks like this (click to enlarge):

[Figure: histogram of simulated lognormal values]

We can then multiply these simulated values by the firm’s billable quota to obtain a series of simulated billable hour totals.

Going back to our Excel workbook, in cell D2 we’ll enter the following formula:

=dynrngLN(0.05,0.1)

Look in the lower right-hand corner of cell D2 and locate the fill handle. Click and drag the fill handle down to generate as many simulated values as you wish. Generally speaking, when it comes to simulation, the more the better. Nevertheless, I strongly advise you to set your calculation preference in Excel to manual (Excel > Preferences > Calculation > Manual); otherwise, Excel will go through the often time-consuming chore of recalculating every time you change any cell on that worksheet. On the manual setting, you can force calculation by pressing Command-=.

Also, if at any point you’re satisfied with the values you’ve simulated in column D, you can always copy them and use Paste Special > Values to make them constant. This will avoid the lengthy re-simulation process every time you recalculate.

Here, we’ll simulate only 100 values to keep things manageable, but you could easily use this same procedure to run 10,000 or more simulations.

Next, in cell E2 we’ll enter:

=$B$7*D2

then drag down so that columns D and E contain the same number of values. Don’t forget the dollar signs in the formula above; they tell Excel to use the same starting value for expected demand in every iteration.

After completing this step, you’ll have a list of simulated billable hour values in column E. Take a look at the list and see if it strikes you as realistic. For example, it may be unrealistic to expect a first-year associate to bill as little as 75% of quota (because they’d be fired before the end of year 1) or as much as 125% of quota (unless your firm is located in Manhattan). If you want to bring the range of simulated values closer to the 4000 hour estimate, you can adjust the standard deviation value in cell D2 downward from 0.1 to 0.05 or even lower (as long as the number is greater than zero). I’ve deliberately left the estimates somewhat rough in this example to better illustrate the range of possible outcomes.

The heart of the analysis is in column F. Before we begin, go ahead and format the cells in columns F and G to accommodate currency values, and specify that negative values are to be in red. You’ll see why in a moment.

In cell F2, you’ll want to enter the following formula:

=($B$2*$B$3*$B$5*E2)/(1+$B$4)^$B$6

then drag it down as before.

There’s a lot going on in this calculation, so I’ll break it down. Between the first set of parentheses, we’re multiplying the estimated collectible percentage times the average billing rate, times the convenience yield multiple, times the simulated number of billable hours. This yields a gross estimate of revenue. We then discount the gross estimate for opportunity cost by dividing it by the quantity 1 plus the risk-free rate, raised to the second power (since B6 = 2, our two-year window). This in turn yields a net simulation of revenue.
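To make the arithmetic concrete (using the starting values above, with the convenience-yield multiple at 1.08): if a given iteration simulates exactly 4000 hours, gross revenue is 0.65 × $235 × 1.08 × 4000 = $659,880, and discounting by 1.02^2 ≈ 1.0404 gives net simulated revenue of roughly $634,250. Against the $564,000 of overhead in cell B8, that iteration would show income of about $70,250, or an ROI of about 12.5%.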

In column G, we’ll calculate the resulting income from each simulation. Enter the following formula in cell G2, then drag down:

=F2-$B$8

Last, we want to calculate our return on investment for each simulation. Go ahead and format column H to contain percentage values, and specify two decimal places. Then in cell H2, enter the following formula:

=G2/$B$8

and drag down as before. Column H now gives us a range of simulated ROIs, which for most purposes is the number that the firm’s executive committee is really interested in.

[Figure: completed columns D through H]

Bear in mind also that the ROI simulations in column H are somewhat conservative, in that we’ve already discounted revenue by the RFR (or whatever value you specified in cell B4).

To visualize the data, select columns E through H and then click on the Charts tab. Select Scatter, and then Smooth. When Excel creates the chart, right-click on any of the markers showing Simulated Revenue, and choose Delete. Then right-click on any of the markers showing Simulated Income, and choose Delete. You’re now left with a chart that plots billed hours versus ROI, which should be the analysis you’re looking for. Since we’re using simulation, your chart will differ slightly, but the one I created looks like this (I’ve added axis labels; click to enlarge):

[Figure: ROI vs. simulated billable hours]

As you can see, break-even is somewhere around 3500 hours for the two-year horizon, or about 1750 billable hours a year. It’s also easy to visualize the required number of billable hours for various returns on investment: simply choose an ROI percentage on the left, draw a straight line to the right until it intersects the curve, and then draw a straight line down to the number of billable hours.

In part 2 of this article, I’ll show you how to use the Solver add-in to calculate the exact number of billable hours that correspond with break-even, as well as any hurdle rate you choose.

Posted in Business, Excel

Building a Quasi-Random Password Generator in Mathematica

Here’s a simple, down-and-dirty password generator in Mathematica. I wrote it in version 9, but it should be backward compatible to . . . I don’t know, way back. There’s nothing really fancy or artful here; it’s just something that I find useful.

You’ll note that the function takes two inputs: the first is a binary flag indicating whether to include non-alphanumeric characters (0 for everything, 1 for alphanumeric only), and the second is the number of characters you want your password to contain.

cut[charlist_] := RotateLeft[charlist, RandomInteger[{10, 58}]]; (* cut the deck at a random depth *)

shuffle[charlist_] := Module[{n = Ceiling[Length[charlist]/2]},
  Riffle[Take[charlist, n], Drop[charlist, n]]]; (* riffle the two halves together, keeping every character *)

pwdFn[anonly_: 0, len_] :=
 Module[{chars, charsX, nums, set, numcuts, inter, ret, fin},
  charsX = {"a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
    "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", "A",
    "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O",
    "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", "!", "@", "#",
    "$", "%", "^", "&", "*", "-", "_", "+"};
  chars = Drop[charsX, -11]; (* letters only *)
  nums = {"1", "2", "3", "4", "5", "6", "7", "8", "9", "0"};
  set = If[anonly == 1, Join[chars, nums], Join[charsX, nums]];
  numcuts = RandomInteger[{233, 6765}]; (* a random number of cut-and-riffle passes *)
  inter = Nest[shuffle[cut[#1]] &, set, numcuts];
  ret = RandomChoice[inter, len]; (* draw len characters, with replacement *)
  fin = StringJoin@ret;
  Return[Style[fin, FontFamily -> "Courier", FontSize -> 14]]];

pwdFn[1, 16]

gYf2RcR96Z7NYgKZ

pwdFn[0, 16]

02ktc%bl*U6v@+GD

Posted in Mathematica

Excel Random Numbers from Various Distributions

As a sort of companion to the most recent post dealing with alpha-Stable random values in Excel, here’s a VBA module that will generate quasi-random numbers from any of the following distributions:

  • Standard Normal(0, 1) distribution
  • Normal(μ, σ) distribution
  • Log-normal(λ, ρ) distribution
  • Chi-square(n) distribution
  • Student’s T(n) distribution
  • Weibull(η, σ) distribution
  • Pareto(α, κ) distribution
  • Exponential(α) distribution
  • Rayleigh(α) distribution
  • Cauchy(μ, γ) distribution

Note that you’ll need to choose a method of generating Uniform random variates; be aware that Excel’s built-in quasi-random number generator, RAND(), is notoriously weak. I prefer to use the Mersenne Twister VBA module available here, and the random number module linked above incorporates the Mersenne Twister module’s nomenclature.
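To give a flavor of how generators like these work, here’s a minimal sketch of inverse-transform sampling for the Exponential(α) case (my illustration, not the module’s actual code; genrand() stands in for whatever Uniform(0,1) function your Mersenne Twister module exposes):

Public Function rndExponential(alpha As Double) As Double
' Inverse-transform sampling: if U ~ Uniform(0,1), then
' -Log(U)/alpha follows an Exponential(alpha) distribution.
    Dim u As Double
    u = genrand() ' assumed Uniform(0,1) source; must never return exactly 0
    rndExponential = -Log(u) / alpha
End Function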

You’ll also need to apply your own random seeding algorithm. Below is a random seed function that utilizes your computer’s internal clock:

Public Function randseed()
' Generates a positive integer value
' from the Excel Date/Time reading

    Application.Volatile
    Application.ScreenUpdating = False
    With Application.WorksheetFunction
    
    Dim currtime As Date
    Dim stime As String
    Dim sval As Double
    Dim inter As Double
    currtime = Now
    stime = CStr(Now)
    sval = TimeValue(stime)
    inter = (sval * 10 ^ 11) / .RandBetween(1000, 10000)
    randseed = Round(inter, 0)
    End With
    Application.ScreenUpdating = True
    
End Function
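
If your VBA port of the Mersenne Twister keeps the reference implementation’s names, seeding it from the function above looks something like this (hypothetical; substitute your module’s actual initializer):

Sub InitRNG()
' Seed the Mersenne Twister from the clock-based randseed function.
' init_genrand is the reference C implementation's initializer name;
' adjust to match your VBA port.
    init_genrand randseed()
End Sub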
Posted in Business, Excel, Finance & Investing

alpha-Stable Random Values in Excel

It was a good day all around when Microsoft brought back macros in Excel 2011 for Mac.

Here’s a VBA routine that will generate a 1-D array of pseudo-random numbers from the alpha-Stable distribution. Note that you’ll need to include this excellent Mersenne Twister module to generate the uniform random variates. Please further note that you’ll need to provide the parameter values for alpha, beta, gamma, and delta; this routine won’t fit data.
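
For the curious, here’s a minimal sketch of the Chambers-Mallows-Stuck transform on which routines like this are typically built (my sketch, not necessarily the downloadable module’s code; genrand() stands in for your Mersenne Twister module’s Uniform(0,1) function, and the alpha = 1 branch, which requires a separate formula, is omitted):

Public Function rndStable(alpha As Double, beta As Double, _
    gam As Double, delta As Double) As Double
' Chambers-Mallows-Stuck transform, alpha <> 1 branch only.
    Dim pi As Double, v As Double, w As Double
    Dim b As Double, s As Double, x As Double
    pi = 4 * Atn(1)
    v = pi * (genrand() - 0.5)    ' Uniform(-pi/2, pi/2)
    w = -Log(genrand())           ' standard Exponential variate
    b = Atn(beta * Tan(pi * alpha / 2)) / alpha
    s = (1 + (beta * Tan(pi * alpha / 2)) ^ 2) ^ (1 / (2 * alpha))
    x = s * Sin(alpha * (v + b)) / Cos(v) ^ (1 / alpha) * _
        (Cos(v - alpha * (v + b)) / w) ^ ((1 - alpha) / alpha)
    rndStable = gam * x + delta   ' apply scale and location
End Function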

It’s a helpful tool if you’re noodling in Excel and just want to run some quick return scenarios on a particular asset.

Posted in Business, Excel, Finance & Investing

Portfolio Simulation in Mathematica (edited for version 9)

Last year I posted a demonstration of copula-based portfolio simulation in Mathematica version 8. Since then, Wolfram has released version 9, which includes lots of new functionality that will be useful for investors and managers – particularly in the area of statistics.

As I also noted in the earlier post, one problem with multivariate simulation in Mathematica is excessive computation time. Using explicit Mathematica functions, it can take an hour or more to produce a 10^4-row matrix of simulated observations for a portfolio with fewer than a dozen assets. And since ten thousand observations is a bare minimum for serious analysis, Mathematica’s computation time is a barrier to use. Another purpose of this article is to present a neat workaround that returns a same-sized matrix of simulated data in just a few seconds.

While I’m tempted to take all the unchanged content from my earlier post and just incorporate it by reference in this article, I think readers using version 9 and later will find it more helpful to have all the information in one place. The earlier article is still a working, albeit less content-rich, example of portfolio simulation in Mathematica version 8, and it can still be utilized by those who have not yet updated to version 9.

As before, we simulate portfolio daily returns using a Student’s T copula. This time, though, we’ll be using a different set of daily stock prices, in part because I want to illustrate the ease with which Mathematica handles negative correlations in the context of multivariate distributions.

First the disclaimer: the following simulation and the particular stock price data referenced herein are purely illustrative. Absolutely no warranty or recommendation on the part of the author or site is intended, including no recommendation or opinion whatsoever with respect to the specific equities chosen or their suitability as an investment. Further, the Mathematica code herein is offered as-is, and with no warranty whatsoever, including warranties of merchantability or fitness. Use the following code at your own risk, but under no circumstances should any reader use the following methodology and/or Mathematica code for actual investing.

Data Import and Notional Portfolio

Assume we have a 5-asset portfolio composed of four small-cap equities with a recent history of high performance (EGHT, IOSP, MG, and MGAM), and as a simple hedge, we’ll arbitrarily choose ProShares Dow UltraShort (DXD) as our fifth asset.

Although Mathematica will fetch closing prices and a few select fundamentals with respect to a given equity, for purposes of this example it’s easier just to download the table of daily log-deltas and corresponding dates for the notional portfolio here, the characteristics of which can be seen below:

[Figure: empirical characteristics of the notional portfolio (empchart)]

[Figure: cumulative portfolio returns (cumretp)]

Parenthetically, I should add that the fundamental data available through Mathematica’s built-in FinancialData function are in my opinion much too limited to use Mathematica as a stand-alone application for managing a portfolio. No doubt this is deliberate in light of Wolfram’s recently introduced Finance Platform. Since at present the Finance Platform is available only for Windows, I have no experience with it.

Portfolio Descriptive Statistics

First we construct a function that will return a list of useful descriptive statistics for a corresponding array of daily returns data:

Clear[descripStat];
descripStat[data_] := Module[{drift, volatility, autocorr, medi, fin},
  drift[x_List] := Mean[x]*252;
  volatility[x_List] := StandardDeviation[x]*Sqrt[252];
  autocorr[x_List] := Module[{x1 = Drop[x, -1], x2 = Drop[x, 1]},
    Correlation[x1, x2]];
  medi = {N[Length[data]], N[Variance[data]], N[StandardDeviation[data]],
    N[MeanDeviation[data]], N[MedianDeviation[data]], N[QuartileDeviation[data]],
    N[Mean[data]], N[HarmonicMean[data]], N[Median[data]], N[Skewness[data]],
    N[QuartileSkewness[data]], N[Kurtosis[data]], N[Quantile[data, 0.25]],
    N[Quantile[data, 0.75]], Max[data], Min[data], drift[data] // N,
    volatility[data] // N, autocorr[data] // N};
  fin = TableForm[medi,
    TableHeadings -> {{"Length", "Variance", "Standard Deviation",
       "Mean Deviation", "Median Deviation", "Quartile Deviation", "Mean",
       "Harmonic Mean", "Median", "Skewness", "Quartile Skewness", "Kurtosis",
       "First Quartile", "Third Quartile", "Maximum", "Minimum", "Drift",
       "Volatility", "Autocorrelation"}, {""}}];
  Return[fin]];

Because closing stock prices are coincidentally sometimes the same from one day to the next, and since a number of these statistical functions will throw a division-by-zero error, we need to change all of our zero-valued log-delta observations to a number that’s marginally different from zero. We’ll do that with a replacement rule, rul:

rul = (g_ :> RandomReal[{0.0000000001, 0.00000001}] /; g == 0);

Next we construct a table of descriptive statistics for each of the equities in the portfolio (descripStatShort is a variant of the function above that returns the bare list of statistics rather than a formatted table):

Clear[descripTbl];
descripTbl = TableForm[
  Transpose[{descripStatShort[lndxd /. rul][[1]],
    descripStatShort[lneght /. rul][[1]], descripStatShort[lniosp /. rul][[1]],
    descripStatShort[lnmg /. rul][[1]], descripStatShort[lnmgam /. rul][[1]]}],
  TableHeadings -> {{"Length", "Variance", "Standard Deviation",
     "Mean Deviation", "Median Deviation", "Quartile Deviation", "Mean",
     "Harmonic Mean", "Median", "Skewness", "Quartile Skewness", "Kurtosis",
     "First Quartile", "Third Quartile", "Maximum", "Minimum", "Drift",
     "Volatility", "Autocorrelation", "Cowles-Jones Statistic"},
    {"DXD", "EGHT", "IOSP", "MG", "MGAM"}}]

[Figure: table of descriptive statistics for the five assets]

Conditioned VCV Matrix in Mathematica

Next, we need to generate a correlation matrix for the asset log-deltas, and from the standard correlation matrix, we will generate a conditioned VCV matrix using the following Mathematica code:

Clear[matrixV];
matrixV[corrmat_] := Module[{evcmat, evlmat, inter, ret},
  evcmat = Eigenvectors[corrmat]; (* eigenvectors come back as rows *)
  evlmat = DiagonalMatrix[Eigenvalues[corrmat]];
  inter = Transpose[evcmat].MatrixPower[evlmat, 1/2].evcmat; (* symmetric square root *)
  ret = SetPrecision[inter, 8];
  Return[ret]];
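In use, that looks something like this (a sketch; lnall is assumed to be the full n × 5 matrix of log-deltas, consistent with the muvec definition below):

vcvmat = matrixV[Correlation[lnall]];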

[Figure: conditioned VCV matrix]

For purposes of portfolio simulation, it’s the VCV matrix – not the standard correlation matrix – that will govern the simulated portfolio’s codependency.

Before we start working with the VCV matrix, we need to ensure that it’s symmetric and positive definite. Mathematica internally uses Cholesky decomposition in several matrix functions, and an input matrix that is not positive definite will frequently cause Mathematica to throw an error. We can test for both qualities like so:

vcvmat === Transpose[vcvmat]
PositiveDefiniteMatrixQ[vcvmat]

The latter is a new function introduced in version 9.

The price data we’ve collected starts with the first trading day in 2011. Based on that, it’s useful to construct a vector of mean returns for each asset:

Clear[muvec]; muvec = Mean[lnall];

For purposes of illustration, we’ll assume that each of our holdings in the respective assets makes up one-fifth of total portfolio value. But in any case, total portfolio return is simply the dot-product of muvec and a vector of portfolio weights:

Clear[portret]; portret = muvec.Table[1/5, {5}];

Then to obtain the mean annual return, we multiply portret by 252 (the usual number of trading days in a year).

Asset Marginal Distributions

It’s important to recognize that copula-based simulation will return values in the form of CDF probabilities, 0 < p < 1, for the specified multivariate distribution – not simulated observations, per se. Clearly, a 10^4 by 5 matrix of simulated Student’s T probabilities isn’t terribly useful for our purpose. The real power of copula-based simulation comes from applying the appropriate marginal distribution to the simulated CDF values for each asset.
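Schematically, that mapping step looks like this (a sketch only; simCDF stands for the 10^4 × 5 matrix of simulated copula CDF values and margs for the list of five fitted marginal distributions; both names are mine):

simReturns = Transpose[MapThread[InverseCDF[#1, #2] &, {margs, Transpose[simCDF]}]];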

As famously observed by Benoit Mandelbrot, daily changes in equity and commodity prices are not Normally distributed. In fact, they’re not even lognormally distributed. Accordingly, I typically model daily log-deltas in accordance with either the alpha-Stable distribution or the Tsallis q-Gaussian distribution, whichever provides the better fit. Both of these distributions readily accommodate data sets that are longer-tailed and higher-peaked than either the Normal or lognormal distributions:


[Figure: comparative PDFs fitted to EGHT daily log-returns]
The graphic above illustrates the comparative distribution PDFs with respect to EGHT’s daily log-returns.

For an n × 1 array of numerical values, Mathematica accomplishes distribution fitting with the following functions:

stable = EstimatedDistribution[array, StableDistribution[1, α, β, δ, γ]]
tsallisq = EstimatedDistribution[array, TsallisQGaussianDistribution[μ, β, q]]

and thereafter we can test the returned distributions to see which is the better fit like so:

DistributionFitTest[array, #] & /@ {stable, tsallisq}

For our test data, Mathematica returned the following best fits:

[Figure: best-fit marginal distributions for each asset]

Student’s T Degrees of Freedom Parameter

A Student’s T copula function takes as inputs the multivariate correlation (VCV) matrix and the estimated degrees-of-freedom parameter, v (from the Student’s T distribution). Using maximum likelihood, we can estimate the appropriate value for v by maximizing the following function with respect to v, given our VCV matrix:

Clear[tLLFn];
tLLFn[v_, corrmat_] := Module[{dim, ret},
  dim = Length[corrmat];
  ret = -(Pi^(-dim/2)*v^(-1 - dim/2)*Gamma[(dim + v)/2]*
       (dim + v*PolyGamma[0, v/2] - v*PolyGamma[0, (dim + v)/2]))/
    (2*Sqrt[Det[corrmat]]*Gamma[v/2]);
  Return[ret]];
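For example, a hedged sketch of the maximization step (the search bounds on v are my own arbitrary choice):

vEst = v /. Last[NMaximize[{tLLFn[v, vcvmat], 2 < v < 30}, v]]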

For the test data we’re using, Mathematica returns v = 9.

Ordinarily, we would take our VCV matrix and estimated value for v, and plug them into Mathematica’s built-in CopulaDistribution function. As I referenced above, however, the computation times for Monte Carlo simulation using this method are simply untenable. As a result, I’m particularly grateful to the member known as Sasha at mathematica.stackexchange.com for demonstrating how to use interpolation to derive the same simulation output in a fraction of the time.

To accomplish this, you can see from the linked page above that Sasha created a Fritsch-Carlson interpolation function, together with a grid system to hold the interpolated values. Obviously the grid Sasha designed can be adjusted for greater or lesser granularity, depending on user preference and purpose for the simulation. Thereafter, Sasha’s function applies a multivariate Student’s T distribution to the grid (which yields our simulated CDF values), and then it maps the respective marginal distribution to each asset’s CDF array.

The total elapsed time for Sasha’s operation is less than 5 seconds for a 10^4 × 5 simulation, as compared to 15 minutes or more using Mathematica’s explicit CopulaDistribution function.

Scrubbing the Simulated Data

One drawback of modeling with alpha-Stable and/or Tsallis q-Gaussian distributions is that, over much of their parameter space, they have an infinite second moment (variance). In practice, it means that Monte Carlo simulation of these distributions will occasionally return values that are so far out of scale as to be unusable (in statistical terms, outliers). Recalling from our descriptive statistics above that the maximum change in daily log-delta for any of the five equities was on the order of 0.4, you can see from the chart below that we did in fact generate some extreme observations:

[Figure: raw simulated values, including extreme outliers]

If we were simulating log-deltas for a single asset, the scrubbing process would be simple; we’d just remove any entry whose absolute value is greater than some assigned threshold. Since we’re simulating multivariate data, however, each 1 × 5 observation (or each day’s simulated return) is interdependent. Consequently, we need a way to test for multivariate outliers.

Mahalanobis Distance Test in Mathematica

It turns out that there are several ways to accomplish this, but the mathematically simplest is the Mahalanobis distance method. We can implement the method in Mathematica like so:

Clear[scrubCop];
scrubCop[rvraw_, ci_: 0.975] := Module[{md, dims, x2, twov, twov2, ret},
  dims = Length[Transpose[rvraw]];
  md = mDist[rvraw]; (* squared Mahalanobis distance of each row *)
  x2 = Quantile[ChiSquareDistribution[dims], ci];
  twov = MapThread[Prepend, {rvraw, md}];
  twov2 = Complement[twov, Select[twov, #[[1]] > x2 &]];
  ret = Map[Drop[#, {1}] &, twov2];
  Return[ret]];
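The helper mDist isn’t defined above; here’s a minimal sketch of one way to write it, returning the squared Mahalanobis distance of each row (which is what the chi-square quantile comparison in scrubCop expects):

mDist[data_] := Module[{mu = Mean[data], inv},
  inv = Inverse[Covariance[data]];
  Map[(# - mu).inv.(# - mu) &, data]];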

After scrubbing, the box and whiskers chart below shows our plausibly distributed yet long-tailed simulations, free from contaminating multivariate outliers:

[Figure: box-and-whisker chart of the scrubbed simulations]

Posted in Finance & Investing, Mathematica

Mathematica as Guitar Scratch Pad

Below is a block of Mathematica code that will generate and play the last few bars of Pat Metheny’s beautiful solo from “More Travels”.  Just click the triangular play button in the bottom left-hand corner of the graphic once the .CDF application loads.  Typically, there is a 5-10 second delay before the sound begins.

Note that you will need to enable JavaScript if it has been disabled. Please also note that some users have reported difficulty loading the application in Safari, but I’ve heard of no such difficulties with Firefox or Chrome.

Continue reading “Mathematica as Guitar Scratch Pad”
Posted in Mathematica, Music

Rationalizing Popular Outré or: Why Would Anyone Commission Frank Gehry?

It’s more important to be good than original.

—Mies Van Der Rohe


At the Intersection of Art and Architecture, Where Major Accidents Frequently Occur

Some topics actually beg to be addressed.

Those of us who live in free societies accept non-controversially that art is completely entitled to be provocative, even designedly and purposefully so.  Indeed, some legitimately contend that this is art’s primary function (I respectfully disagree, but I digress).

Architecture, at least public* architecture, must satisfy a different standard.

* I use the term “public” herein in a commercial sense to denote places of public accommodation, regardless of who owns them or how they were financed.  In legal terms, I mean structures that are or will be subject to the public accommodations provisions of the Americans With Disabilities Act.

To be sure, much of architecture can be appreciated in the same way as sculpture or other three-dimensional artistic media.  But the aegis is critically different; architecture must always facilitate.

That means aesthetics, no matter how inspired, are but a single component of architectural merit.  More to the point, it means aesthetics must never diminish the utility of public architecture.

At the very core of architecture lie constraints: fiscal, commercial, physical, legal, natural.  And since unrecognized or unheeded constraints are satisfied only accidentally, the first step in architectural creation is and must be analysis – not inspiration.

These dual notions of utility and constraints necessarily inform architectural criticism.  Good architecture satisfies constraints and reconciles aesthetics with utility.  Great architecture leverages constraints and integrates them with aesthetics to provide excess utility.

One important measure of architectural utility is sociological.  Great public architecture unifies and elevates a community, even as great art – no less importantly – often factionalizes.  I like the above quotation from Mies, in part because it encapsulates perfectly the architect’s practical imperative.  Architects are rightfully judged by their ability to envision and then deliver utility, and this is a chief distinction between architects and sculptors.

Which brings me to Frank Gehry, and specifically the Cleveland Clinic’s Lou Ruvo Center in Las Vegas.


[Photo: the Lou Ruvo Center, Las Vegas]

Monumental irony.  The Cleveland Clinic Lou Ruvo Center for Brain Health, designed by Frank Gehry – a building dedicated to the study and eradication of Alzheimer’s disease and other degenerative brain disorders.


Birds Gotta Fly, Fish Gotta Swim

Let me emphasize that I have no quarrel with Frank Gehry, per se.  As an architect, he makes no classical pretensions; he does what he does, and his design propensities are well known.  On the subject of Gehry’s design, I’d merely point out that Albert Hofmann never advocated a daily regimen of his famous discovery; he only claimed that it opened up some interesting possibilities.

In the same way, I’m libertarian enough not only to wish Frank Gehry and his associates well with respect to residential and private commissions, but to absolve him from guilt for accepting design commissions for public buildings.  After all, he has a business to run.

My real beef is with whoever handed the hot mic to Yoko – again, and I’m particularly interested in what would motivate someone to do it.


This Just In: Men Go Crazy in Congregations

By googling the terms “emperor’s new clothes” and “phenomenon,” I discovered the following paper written by three colleagues at Cornell and published in the American Journal of Sociology.  It concerns the sometimes inexplicable popularity of the unpopular; for example, how might one explain it if, over time, music critics began to praise recordings that featured gratuitous profanity spoken over cacophonous rhythms with no discernible melody?

According to the paper by Centola, Willer and Macy, this phenomenon cannot occur within a generalized and fully connected social network.  Popular unpopularity only takes root when agents within a developed local network create their own local preference and feedback loop for an unpopular norm, and thereafter export it to the generalized, connected social network.  To the extent that the generalized network attributes some special expertise to the local network, this of course greatly increases the probability of generalized acceptance.

If Centola et al. are correct, it means the AIA and the Pritzker Prize committee are as responsible as anyone for the Lou Ruvo Center and all of Gehry’s other aesthetic blasphemies.

It also means that the way to prevent future mis-coronation is for architects to connect regularly with people outside of the architecture community.  Actually show an architectural portfolio to people unaffiliated with architecture and ask them if it’s any good.  Listen to what they say and weigh their opinions; don’t automatically dismiss them as philistines if their opinions run counter to the architecture community’s prevailing assessment.  To charitable benefactors and governmental decision makers: trust your eyes.  Above all, don’t let an architect or architecture critic convince you that naked is clothed.


Posted in Architecture, Behavioral Economics, Pet Peeves

The 5 Best Head Coaching Jobs in College Football

To mark the occasion of a rare but seemingly imminent vacancy, we list the top 5 head coaching positions in all of college football.

Andy Staples of Sports Illustrated compiled a somewhat similar list in an article last year, but that list didn’t take into account many of the all-important intangibles that sometimes mean the difference between satisfaction in a job, and surfing monster.com every night.

Once intangibles come into play, five jobs stand above the rest.

Winning tradition obviously matters, but no weight is given to current W-L record or even a school’s winning-percentage trend.  The top 5 head coaching jobs would still be the top 5 even if some or all of the schools were winless right now.  That may seem counterintuitive, but it doesn’t necessarily follow that each of the five incumbents is doing a good job maximizing their opportunity.

Also, since no school is immune, I give no weight to the effect of NCAA sanctions, whether threatened or imposed.  The sole exception to this is Penn State, due to the uncertain long-term effects of truly draconian sanctions, combined with post-Sandusky fallout.  That’s a significant exception because Penn State would otherwise be a borderline top 5 job given the school’s historical recruiting dominance not only in Pennsylvania, but all along the Eastern seaboard from Maryland to Maine – something no other B1G school can boast.

On that same note, in assessing desirability we start from the premise that unless you’re Bill Snyder, better players make better coaches.  That means the most desirable head coaching jobs all enjoy inherent and sometimes unique advantages with respect to recruiting.

We also start from the idea that, all things being equal, the probability that a given high school player will sign with a particular school is most importantly a function of proximity.  Of course things are seldom equal, so the most desirable head coaching jobs are at schools that have a time-tested approach to overcoming resistance to distance.  Taken together, it means the most desirable head coaching jobs are in geographic areas replete with indigenous high school talent, but more narrowly, at schools that have also managed to create an inherent appeal to high school kids.

It turns out that the inherent appeal of a school can be predicted in accordance with certain characteristics.  In that vein, before we identify the top 5 head coaching gigs, it’s helpful to note what factors not only tie these jobs together, but also separate them from the rest of the pack:

  •     A solid base of wealthy, powerful, invested alumni
  •     A national brand, including iconic uniforms that change rarely, if ever
  •     Longstanding winning tradition
  •     Excellent athletic facilities
  •     Top-tier pay scales for the head coach and assistants
  •     A track record of putting players in the NFL (Mel Kiper, if awakened from REM sleep, could instantly recite the 3-deep roster for any of these 5 schools, and NFL scouts can find their way around all 5 campuses without a map)
  •     A fully rationalized and consistent recruiting pitch
  •     A fully developed self-identity (these schools know who they are and who they aren’t)

Not surprisingly, there aren’t many cons that go with the top 5 jobs, but one shared drawback is very high fanbase expectations.  Bowl eligibility for these teams is simply assumed.  Indeed, the more rational and moderate fans of these five schools want to see their team in a BCS bowl more years than not, and in the National Championship game at least once or twice a decade.

One final point before we get to the rankings.

Academics is a two-edged sword, regardless of where a school sits on the academic spectrum.  Universities with stronger-than-average academic reputations have a better chance of recruiting and successfully matriculating offensive linemen and the occasional skill position player with a 1400 SAT.  Yet those same universities may be somewhat disadvantaged with respect to kids who view college as nothing but an interim stop and an unavoidable obstacle to reaching the NFL (regardless of whether the player has a realistic chance of someday making an NFL roster).  Naturally, for universities with average to weaker-than-average academic reputations, the converse may be true.


1.    Southern Cal

Pros

Hollywood

Let’s face it.  An Atlanta kid can be holding scholarship offers from every SEC school and Florida State, but if he gets an offer from Southern Cal his eyes will light up.

Song Girls

Never mind that all of them are dating Newport Beach investment bankers, on TV they manage to look somehow attainable.

National Recruiting Appeal

Southern Cal is the only school whose recruiting pitch is as strong on the other side of the country as it is in their own backyard.

Private Institution

This is a seldom-mentioned but tremendously important advantage.  Any disclosure policies applicable to Southern Cal football are likely self-imposed.  Among lots of other things, that means unpleasant or potentially embarrassing discussions can take place behind firmly closed doors.  It also means Southern Cal is not beholden to the California legislature for funding, nor subject to the caprice of a UCLA-alumnus governor.

First Pick of California High School Players

Were it any other school, this would be listed at the top.

Weather

Arguably the best weather on the planet.

Cons

South Central Los Angeles

To clear up any misunderstanding, the USC campus isn’t actually in Hollywood.

Smaller, Less Engaged Fanbase

Except for the Lakers, Los Angeles isn’t much of a spectator sports town.

L.A. Coliseum

Southern Cal is the only school in the top 5 without an on-campus football stadium.

Pacific Time Zone

It makes no discernible difference with respect to recruiting, but the East Coast will have gone to bed before Southern Cal’s night games are over.

2.    LSU

Pros

First Pick of Louisiana High School Players

With very few exceptions.  The significance of this is difficult to overstate.  Louisiana high schools play an outstanding brand of football; certainly on par with Texas and Florida, and maybe even better when you adjust for the smaller population.  In addition, Louisiana kids who hold a scholarship offer from LSU face significant pressure from the community to stay in-state.

Strong Recruiting All Along the Gulf Coast

The LSU program is known and respected by high school players and coaches from East Texas to Florida.  LSU takes a back seat to no one in terms of recruiting prowess in the South.

Louisiana Has Only One AQ School

Unlike Texas, Mississippi, Alabama, Georgia, Florida, Tennessee, etc.

Proximity to New Orleans

Any game at the Superdome – currently the site of the Sugar Bowl and 1 in 4 National Championship games – is essentially a home game.

Cons

Baton Rouge

Not much pizzazz, but again it’s driving distance to New Orleans.

SEC West Schedule

Brutal, and will remain so for the foreseeable future.

Weather

Very hot and very muggy.

3. (Tie) Texas

Pros

Wealth

In terms of endowment, Texas is the wealthiest university not named Harvard in the United States.

Longhorn Network

Mack Brown’s recent comments notwithstanding, the LHN represents a tremendous advantage.  It alone caused Texas A&M to jump, indignant, from the frying pan straight into the fire.

Dominant Institution in the Big 12

It’s hard to get anything done inside the Big 12 unless Texas goes along.

First Pick of Texas High School Players

With exceptions.  Some Texas kids grow up dreaming of playing for Oklahoma or Texas A&M.  A lesser number grow up dreaming of playing for Baylor.  And that’s about it.

Unparalleled Facilities

To characterize Texas’ facilities as “excellent” is an understatement.  For one thing, the Longhorns’ locker room at Darrell K. Royal Memorial Stadium is orders of magnitude nicer than the Dallas Cowboys’ locker room at Jerry World.

Austin

In fairness, the manifold attractions of Austin are mainly the kind of things that appeal to 30-something professionals.  It’s much less clear whether these are really difference-makers to high school kids.

Cons

Winning Isn’t Enough

Being the head coach at Texas is very much akin to holding state office, in that a mind-numbing litany of protocols and political obligations comes with the job.  Further, the head coach’s ability to satisfy these is as determinative of his success as the team’s W-L record.

Self-Imposed Academic Restrictions

More than could be adequately covered in the general academics discussion above.  Without mentioning names, there have been any number of truly outstanding Texas high school players who Texas wouldn’t recruit due to the school’s own academic concerns.

Big 12 Uncertainty

Who knows what the future holds for the Big 12, but the conference certainly doesn’t feel stable.  Not like the B1G and SEC feel stable.

Goliath Perception Within Texas

Which, incidentally, doesn’t apply to Oklahoma or Texas A&M.  As Wilt Chamberlain famously lamented, “No one roots for Goliath.”  True that, regardless of how many University of Gath alumni there are in Dallas and Houston.

No Recruiting Footprint Outside of Texas

Other than marginal inroads into Arizona and Colorado, Texas has little experience with the unique challenges that accompany recruiting out-of-state.

Weather

Summer is ridiculously hot, to the point that kids visiting from Florida notice it.

3. (Tie) Oklahoma

Pros

Winning Is Enough

Listed first because this factor alone nullifies any advantage the Texas job might otherwise enjoy by virtue of the Longhorn Network or anything else.  When I say “winning is enough,” I mean that Bob Stoops (or whoever has the Oklahoma job) doesn’t always have to conduct himself like he’s running for re-election.  This is something that can only be fully appreciated after you’ve had to do things the other way.

First Pick of Oklahoma High School Players

As with Texas, there are always a few exceptions.  Oklahoma’s is not nearly as significant as LSU’s home-state recruiting advantage, but Oklahoma high schools turn out a strong group of football players every year.

Second Pick of Texas High School Players

Solidly second.  Oklahoma picks before Texas A&M or anyone else besides Texas.

Targeted Out-Of-State Recruiting

Besides the state of Texas, Oklahoma has made solid inroads into Las Vegas (including traditional powerhouse Bishop Gorman), and has also made inroads into California.

Cons

Dependence on Texas

The school and the state.  With respect to the state, Oklahoma simply can’t win consistently without Texas high school players.  With respect to the school, how dependent is it?  Put it this way: each of the other three schools (Oklahoma, Oklahoma State, and Texas Tech) that were invited to join the Pac-12 along with Texas would be clearly and demonstrably better off today if the proposed move had gone through.  Unfortunately for those schools, the Pac-12 wasn’t interested sans Texas.

Big 12 Uncertainty

See Texas.

Norman and Oklahoma City

The NBA Thunder notwithstanding, these towns have even less pizzazz than Baton Rouge.  They’re also more than a comfortable drive away from any large city.

Weather

See Texas.

5.  Notre Dame

Pros

Unmatched National Fan Base

Notre Dame’s strong contingent of alumni and subway alumni are everywhere.

Sui Generis

The exclusive deal with NBC is one example; Notre Dame’s special BCS eligibility criteria are another.  Notre Dame is recognized as unique throughout the world of college football.  This of course greatly streamlines the process when Notre Dame inevitably requests special treatment.

Private Institution

See Southern Cal.

Chicago’s Team

With all due respect to the University of Illinois and Northwestern, Chicago’s college football team is Notre Dame.  Although Southern Cal comes closest, no other school dominates one of the ten largest U.S. cities in the same way.

Unmatched Tradition

Notre Dame has produced more Heisman Trophy winners, by a comfortable margin, than any other school.  The Four Horsemen.  Knute Rockne.  Frank Leahy.  Ara Parseghian.  If kids today don’t recognize those names, that’s on us.

Cons

National Recruiting Imperative

It’s highly doubtful whether Notre Dame could win consistently with only Midwestern kids.

Self-Imposed Academic Restrictions

See Texas, but to a greater degree.

South Bend

Although it’s reasonably close to Chicago.

Weather

Three words: Cold. Snowy. April.

Best of the Rest

Each of the five head-coaching jobs above represents the whole enchilada.  All other head-coaching jobs involve more, and more significant, tradeoffs, which means personal preference plays a role in determining which head coaching job is better than another.

Listed below in no particular order is a non-exclusive collection of next-tier head coaching jobs, with truncated pros and cons:

Stanford

Unparalleled academics, but only so many high school football players can score a 1400 on the SAT.  This makes it hard to build a quality starting lineup, let alone depth.  Great weather.  Difficult to win big without a Plunkett, Elway, or Luck at QB, but since 8-5 is considered a good season, a head coach doesn’t have to win big at Stanford to stay as long as he wants.

Michigan

As with Notre Dame, it’s doubtful whether Michigan can win consistently with only Midwestern players.  Other drawbacks include weather, as well as the B1G’s ongoing and protracted decline.

Ohio State

See Michigan.  Oh, all right.

As with Notre Dame, it’s doubtful whether Ohio State can win consistently with only Midwestern players.  Other drawbacks include weather, as well as the B1G’s ongoing and protracted decline.

Alabama, Florida, Georgia, Texas A&M

Intense competition for in-state and area recruits.  Brutal SEC schedule.

Florida State

Intense competition for in-state and area recruits.  The ACC is first and foremost a basketball conference.

Oregon

Impossible to win consistently with only Oregon high school players.  Phil Knight’s patronage is a big positive.  Oregon is unquestionably hot right now, but its winning tradition is scant, and Oregon emits an unmistakable flavor-of-the-month quality.  While Eugene is a nice college town, it is very remote.

Posted in Business, College Football, Sports

Ode to a Chipotle Franchise

Why do I love thee, Chipotle Mexican Grill (CMG)?  Let me count the reasons:

  1. Low food cost.  In the restaurant game, alchemy is turning rice and beans into a $5 entree.  Chipotle has deliberately selected an inventory of prepared food items to combinatorially maximize its capacity to satisfy customer taste.  At the same time, almost everything it makes begins with ultra-low-cost rice and beans.  Vegetarian?  No problem; the black beans aren’t seasoned with meat, and you can choose from any of the grilled vegetables, picos and salsas.  Brilliant, just brilliant.
  2. Customer-centric focus.  By now I suspect everybody knows that Chipotle will make whatever they have the ingredients for, whether it’s on the menu or not.  Never tried it?  Order barbacoa quesadillas next time.
  3. Made to order, fast.  Chipotle’s service and assembly line means you don’t have to eat something that’s been sitting under a heat lamp, and you don’t have to scrape off the raw onions (or anything else you don’t like) before you can eat.  At the same time, you don’t have to take a number or wait at a table for someone to bring it to you, either.
  4. Spare, post-modern decor.  I’ll own this as my own personal architectural preference, but it also helps to keep the finish-out cost down.
  5. Technologically current.  Order ahead with Chipotle’s iPhone app.

Given Chipotle’s value proposition and service model, it’s no wonder that CMG’s shares have increased in value by 20% since the first of this year, and 155% since January 1, 2010.

Parenthetically, since the topic is best practices with respect to fast food, I’d be remiss if I didn’t also commend Five Guys.  Who would have thought that a fanatical emphasis on freshness – and complementary peanuts – would get you SRO patronage with a menu limited to hamburgers, french fries, and soft drinks?


Posted in Business

5 Things the Apple Store Gets Wrong

Earlier this month, Guy Kawasaki posted on his blog an article entitled “10 Things You Can Learn from the Apple Store,” in which he praised the Apple retail model in ten different respects.  Now, Guy’s analysis is normally spot-on.  In this case, however, I disagree: Apple Stores (at least the ones I’ve visited) are just as likely to do things wrong as right.  Accordingly, here’s my list of 5 Things the Apple Store Gets Wrong, or, 5 Reasons Why I Avoid the Apple Store Whenever Possible.

  1. Queues aren’t inherently bad.  I’ve literally lost count of how many times I: (a) went into the Apple Store to buy something specific; (b) found it; and then (c) couldn’t find an employee who would take my money.  Sometimes I persisted until I found someone, and sometimes I put the item back and walked out of the store in disgust.  Look, I’m as right-brained as anybody, but queuing came about because it’s the most efficient way to process lots of transactions in one location.  Besides, it’s not like Ridley Scott will take his commercial back if Apple asks its customers to stand in line to pay.
  2. Creativity and chaos sometimes go together, but ultimately they’re two different things.  This relates back to number 1.  When I look at the layout of an Apple Store, unfortunately I don’t see Pablo Picasso, Richard Feynman, or Mies Van Der Rohe.  I don’t even see Andy Warhol.  What I see is chaos.  I see lots of employees talking and milling around, and none of them available – let alone eager – to help the customer who already knows what he wants and just wants to pay for it.
  3. Stop needlessly increasing labor cost by assigning an employee to stand at the front of the store and greet people. Especially if that employee doesn’t carry a credit card reader.  The Apple Store isn’t Wal-Mart, and Apple users are savvy enough to recognize a perfunctory greeting when they hear one.  This is a case where Apple can have its cake and eat it; take your greeters and put them behind a cash register.  Or at a minimum, give them a card reader and an unambiguous admonition to use it.
  4. If digital media that’s manufactured by Apple, Inc. isn’t available for download at apple.com, you’d damn sure better stock plenty of it at the Apple Store.  I was late to the party in upgrading from Leopard to Snow Leopard, and by the time I got around to it, Snow Leopard (i.e., Mac OS version 10.6) was no longer available for download.  No problem; I went to the Apple Store to pick it up.  Not only did the Apple Store not stock it, the Apple Store’s employee actually sent me home to order it by snail mail.  Come on, Apple.  This is digital media we’re talking about, not AppleTV.  If you don’t want to stock the physical discs, at least fix it so that Apple Store employees can burn the media to disc for the customer on request.
  5. Since Apple is apparently indifferent to sales per square foot, they should make the floor space 25% bigger.  Guy praises Apple in his article for avoiding clutter.  I guess it depends on how you define the term.  I’ll concede that Apple Stores don’t feature stacks of inventory everywhere you look.  But if the narrow aisles between display tables are so full of people that you can’t get to the item that you went there to buy, what difference does it make why?

Obviously, none of the foregoing detracts from the quality and superiority of the Mac OS.  But for all of Apple’s magnificence in other areas of its business, to contend that Apple’s haphazard retail model also represents best practice is a little rich.


Posted in Business, Pet Peeves