Three Types of Companies: From Quantum Physics to Economics

Published June 2016 | Categories: Types, Research, Business & Economics

It appears that simple mathematical laws describing the profits-revenues behavior of a company can be conceived by treating energy in physics as being equivalent to money in economics. This mathematical analogy permits an extension of Planck's law from physics to economics, with a reinterpretation of the significance of the various mathematical symbols. A careful study of the readily available profits-revenues data for several companies (some specific examples cited are Facebook, Google, ExxonMobil, Best Buy) indicates that there are three types of companies, called Type I, Type II and Type III. All three types of behavior are indeed observed in the real world. The large majority of companies follow the linear law y = hx + c, a few follow the power law, and there are a few recorded examples of public companies that seem to operate in the region to the right of the maximum point of the power-exponential law. Such companies are generally headed towards bankruptcy.


Speculations on Types of Companies
From Quantum Physics to Economics
[Cover figure: schematic plot of Profits, y, versus Revenues, x, showing three curves: the linear law y = hx + c, the power law y = mx^n, and the power-exponential law y = mx^n exp(-ax) with its maximum point, labeled "Mount Profit". Caption: Our First Glimpse of Mount Profit.]
Simple mathematical laws describing the profits-revenues behavior for a company can be conceived by treating energy in physics as equivalent to money in economics. This mathematical analogy permits an extension of Planck’s law from physics to economics. A careful study of the profits-revenues data indicates that all the three types of behaviors illustrated here, schematically, are indeed observed. The large majority of companies follow the linear law y = hx + c, a few follow the power-law, and there are a few recorded examples of public companies that seem to operate in the region to the right of the maximum point of the power-exponential law. Such companies are generally headed towards bankruptcy.
Page 1 of 62

Table of Contents
1. Summary (page 3)
2. Introduction (page 4)
3. The Case for Deducing Simple Mathematical Laws (page 5)
4. Three Types of Companies: Linear Law, e.g., Google, Facebook, ExxonMobil, Ford, Best Buy (page 10)
5. Is there a Mount Profit? (page 26)
6. Non-linear behavior: Power-law Acceleration and Deceleration of Profits growth (page 28)
7. Discussion: The Generalized Planck law (page 32)
8. Conclusions (page 37)
9. Appendix 1: Brief discussion of cost-cutting (page 39)
10. Appendix 2: Hubble's 1929 paper on Hubble's law and the expanding Universe and Big Bang (page 44)
11. Appendix 3: Facebook IPO debacle discussion (page 54)
12. Appendix 4: Quarterly Profits-Revenues data for Ford (page 55)
13. My Open Letter to some friends – and to all who may be interested – Let's build a Profits Engine (page 58)


§ 1. Summary
It appears that simple mathematical laws describing the profits-revenues behavior of a company can be conceived by treating energy in physics as being equivalent to money in economics. This mathematical analogy permits an extension of Planck's law from physics to economics, with a reinterpretation of the significance of the various mathematical symbols. A careful study of the readily available profits-revenues data for several companies, both large and small, indicates that:

- There are three types of companies, called Type I, Type II and Type III, which follow the simple linear law, y = hx + c, depending on the values of h and c (positive or negative).
- While the large majority of companies follow the linear law, y = hx + c, a few follow the simplest type of nonlinear law, the power law, y = mx^n + c. This again yields two possibilities: decelerating profits growth with increasing revenues (n < 1) and accelerating profits growth with increasing revenues (n > 1). The third case, n = 1, recovers the linear law.

What is remarkable is that these purely mathematical speculations can be confirmed: examples of all three types of linear behavior, and of both types of nonlinear behavior, are indeed observed in the real world. This is discussed here by citing the financial data for Facebook, Google, ExxonMobil, Ford Motor Company, Best Buy, and Universal Insurance Holdings, Inc. The last company in this list is currently ranked No. 2 in the Fortune Small Business 100.

These observations lead us to believe that there must be at least a few recorded examples of public companies that operate in the region past (i.e., to the right of) the maximum point of the power-exponential law, i.e., where profits decrease even as revenues increase. Such a decline in profits will eventually become unsustainable and the companies will head towards bankruptcy. Finally, there is Mount Profit (the maximum point on the profits-revenues curve), whose peak we would rather not see.
And, if we do see the peak of Mount Profit, we would be better advised to stay on that side of its peak where profits continue to rise.

§ 2. Introduction
All these years, it seems like such a long, looonnnngggg time, I have resisted becoming a part of the modern social media trend and the "friend requests" that I get from, yes, sometimes genuine friends and family members too. Sometimes, I have even called myself "anti-Facebook" to describe my gut-resistance to these extended contacts with not just friends but their friends' friends' friends and God only knows who else. And, yes, I do personally know of a couple of tragic situations that evolved from such extended contacts (I don't mean Facebook, specifically, but social media contacts in general).

Little did I know that Facebook would become the "poster child" for sharing what I have done here about the need for the application of more rigorous "scientific" methods to the analysis of the huge volumes of financial data that are being published now, on a quarterly and annual basis, for literally hundreds and thousands of companies worldwide. It all started when Facebook launched its IPO and its stock started heading south soon after trading began on Nasdaq last Friday, May 18, 2012. And then, for the first time, I started paying attention to Facebook as a company and to what the Wall Streeters were "speculating" about.

Now, I have even started wondering why we do not "Facebook" monthly, or even weekly, financial statements! Maybe Facebook, Inc. can start a new trend in this as well and start releasing monthly financial statements. Then, I would have even more data to analyze and even more MRPs (marginal rates of profit) to calculate. And maybe many more can merrily join this new "data mining" industry. Cheers!

This brings us to an interesting point that we can learn from (or speculate about): the simple mathematical laws that I have been talking about in the two earlier documents on Facebook. We will call them Ref. [1] and Ref. [2] in what follows.

1. http://www.scribd.com/doc/94103265/The-FaceBook-Future The FaceBook Future: Revenues-Profits Analysis, published May 19, 2012.
2. http://www.scribd.com/doc/94325593/The-Future-of-Facebook-I The Future of Facebook – I, published on May 21, 2012.

§ 3. The Case for Deducing Simple Mathematical Laws
I have already discussed the significance of the elegant mathematical equation given below in the appendices to the two documents just listed. There I have also shown how to analyze financial data following methods that any good science or engineering major would be tempted to use instinctively. Indeed, the graph of the equation below (with b = -1 and c = 0) describes what is known as the Cosmic Microwave Background (CMB) Spectrum, the radiation left over from the Big Bang that created the Universe.

y = mx^n [e^(-ax) / (1 + be^(-ax))] + c

We will now discuss how a number of special cases of this general equation can be used to describe the workings of the financial world and the economy as a whole.
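For readers who want to experiment with this general equation, here is a minimal Python sketch; the function name and the sample constants are mine, chosen only to illustrate the special cases named in the text:

```python
import math

def generalized_law(x, m, n, a=0.0, b=0.0, c=0.0):
    """y = m*x**n * [exp(-a*x) / (1 + b*exp(-a*x))] + c

    Special cases discussed in the text:
      a = b = c = 0         -> power law,  y = m*x**n
      a = b = c = 0, n = 1  -> linear law, y = m*x
      b = c = 0             -> power-exponential law, y = m*x**n * exp(-a*x),
                               which peaks (the "Mount Profit") at x = n/a
    """
    e = math.exp(-a * x)
    return m * x ** n * (e / (1.0 + b * e)) + c

# Power-exponential case with n = 2, a = 0.5: the curve peaks at x = n/a = 4,
# so values computed just below, at, and above x = 4 rise and then fall.
ys = [generalized_law(x, m=1.0, n=2.0, a=0.5) for x in (3.0, 4.0, 5.0)]
```

Setting b = -1 (with c = 0) recovers the Planck-like form mentioned above in connection with the CMB spectrum.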

see http://en.wikipedia.org/wiki/Cosmic_Background_Explorer , http://www.scholarpedia.org/article/Cosmic_background_explorer http://cosmos.lbl.gov/cobehome.html

In fact, in my doctoral thesis work, when we started analyzing my experimental results (very briefly, we built a special instrument, similar to a parallel-plate viscometer, to determine the "viscosity" of a new type of partially solid metal alloy, with a novel microstructure, first produced at MIT), we first used a simple linear law to explain our data. The linear law of interest was Newton's law for the viscosity of a fluid, which is usually written as y = μx, where the constant "μ" is the viscosity and x and y are the two quantities (shear rate, x, and shear stress, y) measured in the experiment.

There seemed to be an unmistakable deviation from the linear law. This led us to the nonlinear law, or the power law, which became the hallmark of the mathematical analysis of the experimental data presented in my doctoral thesis; see V. Laxmanan and M. C. Flemings, Metallurgical and Materials Transactions A, Volume 11, Number 12 (1980), 1927-1937, DOI: 10.1007/BF02655112, http://www.springerlink.com/content/45uj4gh37051172r/.

The power law is written as y = mx^n, where "n" is the power-law index. This is a special case of the more general law for the CMB Spectrum (a = b = 0 and c = 0). For n = 1 we get the linear law y = mx, or y = μx. We use different symbols for the constants since the numerical values of the constants change when we go from one type of behavior to the other.

More generally, the power law is y = mx^n + c, where c is the nonzero intercept made by the "curve" on the y-axis. In the real world, when x = 0, y is NOT always equal to zero. A nonzero constant must often be added, as was also recognized by Planck himself in developing his expression for the entropy S; see the discussion in § 7, Generalized Radiation Law, and Appendix 1.
There are many such examples of laws with non-zero intercepts (see Ref.[1]), the most common ones being: the hydrostatic law, which describes how pressure varies with depth below the free surface of water (as in a swimming pool, a huge lake, or the ocean), Charles’ law describing how the volume of a gas increases with temperature, and Einstein’s law which describes the photoelectric effect. The modern “photocells” that we use widely work on the photoelectric principle.


As we have discussed already, in the world of finance, or economics, the nonzero intercept is related to "Costs" in the simple equations given below, first in words and then restated using math symbols.

Profits = Revenues – Costs    …………(1)
P = R – C                     …………(2)
P/R = 1 – (C/R)               …………(3)

Since the cost C is inherently nonzero (see Appendix 1 for a brief aside about how "costs" affect us all), the profit margin, P/R, can either increase or decrease as revenues increase. Because of the nonzero C, a doubling of the revenues will NOT lead to a doubling of the profits. But this is exactly what we implicitly assume when we use the profit margin P/R to compare different companies, within a sector of the economy, or even across sectors. The higher the profit margin, the better the company. This also means, logically speaking, that we expect a doubling of the revenues to double the profits. But we all know that this is NOT true.

We have actually witnessed companies like GM, Ford, GE, ExxonMobil, Microsoft, and Walmart grow and mature into huge companies reporting hundreds of billions of dollars in revenues and billions of dollars in profits, but never once has anyone bothered to check if a doubling of the revenues did indeed lead to a doubling of the profits! Or to ask why it did not, if it did not.

Consider the data for Ford Motor Company for the period 2000-2011 (see Table 4 in a later section where the Ford data is discussed). In 2009, Ford reported profits of $2.72 billion with revenues of $116.3 billion. In 2010, the profits had more than doubled, to $6.6 billion, with revenues of $128.95 billion, an increase of only about 11% in revenues. Or, explain the tripling of profits from $6.6 billion in 2010 to $20.21 billion in 2011, with revenues going up by only about 5%. Profits increased by $13.65 billion with revenues only going up by $7.31 billion.
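The Ford figures just quoted make the non-proportionality easy to check; a small sketch using only the numbers cited in the text (the 2011 revenue is taken as 128.95 + 7.31 = 136.26):

```python
# Ford Motor Company annual figures quoted in the text:
# (revenues, profits) in $ billions
ford = {
    2009: (116.3, 2.72),
    2010: (128.95, 6.60),
    2011: (136.26, 20.21),  # revenues: 128.95 + 7.31
}

def growth(year_a, year_b):
    """Fractional growth in (revenues, profits) between two years."""
    (xa, ya), (xb, yb) = ford[year_a], ford[year_b]
    return xb / xa - 1.0, yb / ya - 1.0

rev_g, prof_g = growth(2009, 2010)    # revenues up ~11%, profits up ~143%
rev_g2, prof_g2 = growth(2010, 2011)  # revenues up ~6%, profits roughly tripled
```

If profits scaled with revenues, the two growth rates in each pair would match; the nonzero cost intercept C is what breaks the proportionality.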

How do we explain this behavior using the y/x ratio as one of the leading measures of performance and profitability? Yet, we continue to use the simple profit margin, y/x or P/R, blithely, every day, to analyze the financial performance of companies. And we use the ubiquitous EPS, which is also a simple ratio, just like the profit margin (PM). Very soon, companies, both good and bad, are driven out of business since they cannot meet Wall Street expectations about the continuous increases in the EPS and the PM demanded by Wall Street analysts, along with increasing revenues. These are unreasonable expectations.

Let me repeat what I said in Ref. [2]. It is not all just Wall Street "greed". It is also a lot of Wall Street "stupidity", since there are unappreciated laws that seem to dictate how a company will grow and mature. This can only be understood if we shed the current obsession with "ratio analysis" and start using rigorous, scientifically and mathematically sound approaches to analyzing the large masses of business data that are being compiled each quarter, summarized in annual reports, and in many special reports at the end of each decade, or every 15 years, or 25 years, or 50 years, or 75 years, and so on.

To me, as an R&D professional who has spent all of his time at some of the world's topmost research institutions, this seemed like an area that was ripe for research. It seemed like the world of finance and economics was operating like astronomy or physics before the advent of Galileo, Kepler, and Newton.
Just take a look at Hubble's law and the raw data on the speeds (V) and the distances (D) of galaxies that Hubble reported (after years of painstaking and meticulous observations) in his 1929 paper, which led to his famous law; see http://www.mpa-garching.mpg.de/~lxl/personal/images/science/hub_1929.html

Since this is so important to understanding why finance, business, and economics majors must take the whole idea of discovering "mathematical laws" very seriously, I have reproduced the entire 1929 paper by Hubble as Appendix 2 and have also replotted the data from Tables 1 and 2 of Hubble's famous paper, for convenience and clarity. Please do take a look.


Does the astronomical data warrant such a conclusion, as far-reaching as Hubble's, which eventually led to the idea of an expanding universe and the Big Bang theory of creation? No finance major would dare to, or even dream about, proposing such a "law" based on the "quality" of the "raw data" that Hubble presented in 1929. But Hubble did make such a conjecture, and even Einstein accepted it, later calling his introduction of the cosmological constant (into general relativity, before the publication of Hubble's paper in 1929) the biggest "blunder" of his life.

Let's summarize here some facts to ponder.

- Hubble's observations were limited to astronomical distances of just a few megaparsecs (Mpc) and maximum galaxy velocities of about 1000 to 2000 kilometers per second (km/s).
- The slope of the graph, now called the Hubble constant H0, that he deduced was 500. This has since been revised and is believed to be in the range of 60 to 80, based on observations of galaxies that are several thousands of Mpc away and moving at far higher velocities than those reported by Hubble. (For reference, the speed of light is 300,000 km/s, or 186,000 miles per second.)

This is how science progresses, and astronomy, in particular, has progressed since the days of Kepler and Hubble. In fact, Hubble never received the Nobel Prize since, not so long ago, back in the 1930s, "astronomy" was not considered a discipline worthy of such a high honor. Since then, at least two Nobel Prizes have been awarded for discoveries that confirmed the Big Bang theory and its implications (for example, the 2006 Nobel Prize for Smoot and Mather for the "discovery of the blackbody form and anisotropy of the cosmic microwave background radiation", see
http://www.msnbc.msn.com/id/15113168/ns/technology_and_sciencescience/t/americans-win-nobel-big-bang-study/#.T753dcUxeQ4 and the earlier 1978 Nobel Prize for Penzias and Wilson for the discovery of CMB radiation http://www.bell-labs.com/user/apenzias/nobel.html and http://www.nobelprize.org/nobel_prizes/physics/laureates/1978/ ).


With this general background, let us now consider at least three different types of companies that one can envision, theoretically, based on this simple mathematical law y = hx + c which is a restatement of P = R – C.

§ 4. Three Types of Companies
Implications of the Linear Law
The simple linear law, y = hx + c, suggests at least three different types of companies. As revenues x increase, we expect profits y to increase. How does a company grow or mature? As revenues increase, in most cases (I have literally studied hundreds of companies since 1998, when I first got interested in this topic), we find that the profits generally increase at a fixed rate. The slope h = ∆y/∆x is fixed and can easily be deduced for each company from observations over a couple of years (say, 10 quarters, as with Facebook, Inc.). This can be used to estimate the increase in profits, ∆y = h∆x, as revenues increase by the amount ∆x. There is no need to "speculate" about earnings; that should be a thing of the past.
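The slope h can be deduced from a handful of quarterly observations by an ordinary least-squares fit; a sketch with synthetic data (the function name and the numbers are mine, for illustration only):

```python
def fit_linear_law(revenues, profits):
    """Least-squares fit of the linear law y = h*x + c.
    Returns (h, c); h is the marginal rate of profit, MRP = dy/dx."""
    n = len(revenues)
    mx = sum(revenues) / n
    my = sum(profits) / n
    sxx = sum((x - mx) ** 2 for x in revenues)
    sxy = sum((x - mx) * (y - my) for x, y in zip(revenues, profits))
    h = sxy / sxx
    c = my - h * mx
    return h, c

# Synthetic quarterly data generated from y = 0.3*x - 50 (so the fit
# should recover h = 0.3 and c = -50 exactly):
xs = [400.0, 500.0, 650.0, 800.0, 900.0]
ys = [0.3 * x - 50.0 for x in xs]
h, c = fit_linear_law(xs, ys)
delta_y = h * 100.0  # projected profit increase for a $100 increase in revenues
```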

Type I behavior: h > 0 and c < 0
Positive h and negative intercept c: y = hx + c = h(x – x0), with x0 > 0
The interesting feature of the Type I company is the negative intercept c on the profits-axis, see Figure 1. This means that there is a “cut-off” revenue x = x0 below which the company reports a loss. This is the case with Facebook, and also Google and ExxonMobil, three examples that we will consider here. In the case of Facebook, an actual loss was reported in 2007 and 2008 when revenues were low. As revenues increased, losses decreased and eventually Facebook “broke through” and started reporting a profit starting in 2009.

As discussed in Ref. [1], this is related to the classical "breakeven" analysis for profitability. The constant c = –a, where "a" is the fixed cost in the equation

Total Costs = Fixed Costs + Variable Costs = a + bN

where "N" denotes the number of units of a product that must be sold to generate the revenues R. If k is the unit price, the total revenues R = kN, and hence the profits

P = R – C = kN – (a + bN) = (k – b)N – a    …………(4)

This basic relation can be rewritten as P = [(k – b)/k] R – a, since N = R/k. This yields the linear law between profits and revenues, which can be shown to apply to literally hundreds of companies. (Just prepare the graph for your company of interest and convince yourself!)
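The breakeven algebra above can be checked numerically; a sketch with made-up unit economics (a = fixed cost, b = unit cost, k = unit price, as defined in the text):

```python
def profit_from_units(N, a, b, k):
    """Equation (4): P = kN - (a + bN) = (k - b)N - a."""
    return (k - b) * N - a

def profit_from_revenues(R, a, b, k):
    """The same law restated in revenues: P = [(k - b)/k]*R - a, with N = R/k."""
    return ((k - b) / k) * R - a

# Made-up example: fixed cost 1000, unit cost 4, unit price 10.
fixed, unit_cost, price = 1000.0, 4.0, 10.0
units = 500
revenue = price * units
p1 = profit_from_units(units, fixed, unit_cost, price)
p2 = profit_from_revenues(revenue, fixed, unit_cost, price)

# Breakeven revenue x0, where the line crosses P = 0: x0 = a*k/(k - b)
breakeven = fixed * price / (price - unit_cost)
```

Both forms give the same profit, and the breakeven revenue is the positive x-intercept x0 = –c/h of the Type I line.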

[Chart: Profits (Net Income), y, $ millions, versus Annual Revenues, x, $ millions, for Google, Inc. (2001-2011). Type I behavior: y = 0.274x – 135.5 = 0.274(x – 494.35).]
Figure 1: A good modern example of Type I behavior is Google, Inc. as illustrated here. The data for the year 2008 was not included in the linear regression. A closer look shows that between 2001 and 2005, Google revealed a significant
acceleration in the profits-revenues growth (n > 1), but this was quickly followed by a deceleration. The overall long-term trend is the linear law, as shown here. Notice that revenues have more than doubled since the year ending 2007, from $16.6 billion to $37.9 billion. What about the profits? Amazingly, Google's profits have also more than doubled, from $4.2 billion to $9.74 billion. However, the law describing the profits-revenues data is y = hx + c, NOT y/x = constant. As x increases, the ratio c/x becomes smaller and smaller and y/x ≈ slope h. Notice that 37.9/16.6 = 2.283 and 9.74/4.2 = 2.319: an almost exact doubling of profits with a doubling of revenues (∆x = $21.3 billion and ∆y = $5.54 billion). If Google's revenues double again, will we see profits doubling again?

[Chart: Profits (Net Income), y, $ millions, versus Annual Revenues, x, $ millions, for Facebook, Inc. Two linear segments: Type I behavior, y = 0.588x – 288 = 0.588(x – 388), x0 = 388; Type II behavior, y = 0.263x + 24.82 = 0.263(x + 94.45), x0 = –94.45.]
Figure 2: The annual revenues and profits data for Facebook Inc. is replotted here to illustrate the fundamental reason for the apparently misleading positive intercept in the linear law observed when we analyze the profits-revenues data for mature companies. As shown here, the Facebook data can be described using two linear segments with two different values of the slope h. As revenues increase, the
slope decreases and the intercept changes from a negative value (due to the fixed costs) to a positive value. This does NOT mean the company has a negative fixed cost. Rather, it means that the company has matured and is now operating with a lower MRP (lower slope h).

Type II: h > 0 and c > 0
Positive h and positive intercept c: y = hx + c = h(x – x0), with x0 < 0
As a company grows and its revenues increase, there is a significant change in the slope h of the graph of profits versus revenues, as we have seen with Facebook. Profits again increase, at a fixed rate h, but at a much lower rate. The slope h is positive but is significantly lower than the slope for the Type I company. This has been demonstrated with Facebook in Ref. [1] above. Only the graph is reproduced in Figure 2 here, for convenience, without further discussion.

[Chart: Profits (Net Income), y, $ millions, versus Annual Revenues, x, $ millions, for ExxonMobil (2004-2011). Type I behavior: y = 0.112x – 8948 = 0.112(x – 79,662).]

Figure 3: The annual revenues and profits data for ExxonMobil for the period 2004-2011, obtained from their quarterly reports. At its peak, in 2008, the company reported revenues of $477.3 billion and profits of $45.22 billion. The revenues were higher in 2011, at $486.4 billion, but profits were lower, only $41 billion. Nonetheless, the graphical representation here shows that the simple linear law y = hx + c holds, and these differences are essentially small fluctuations that are not statistically significant. The company exhibits what has been called Type I behavior, although it clearly operates at revenue levels well above the "cut-off" or "breakeven" value. This is given by the positive intercept on the x-axis and equals x0 = –c/h = $79,662 million.

If these trends continue (and they will if energy prices keep rising), the company should report revenues of $1 trillion and profits in excess of $100 billion in the next decade. An exactly similar upward-sloping linear trend (i.e., Type I behavior) is revealed if we consider quarterly data (24 quarters were examined) for ExxonMobil. The same trend is revealed if we consider the increase in revenues and profits during the course of a single year, by considering the cumulative revenues and profits for 3, 6, 9, and 12 months.

The linear behavior observed here, with a "highly mature" company like ExxonMobil and a relatively young, but mature, company like Google, also emphasizes the need to exercise extreme caution in applying nonlinear laws to analyze the finances of a young and emerging company like Facebook. While nonlinear laws may seem intellectually and mathematically appealing, their hasty application is fraught with dangers, and the "unrealistic" predictions can actually destroy emerging companies like Facebook. Google revealed a period of highly accelerated profits before settling down to the overall linear trend revealed in Figure 1.
Again, the linear law implies that a fixed ∆x always yields a fixed ∆y, with ∆y = h∆x, always. Predicting future profits thus becomes as simple as predicting the future position of a car traveling on a highway at a fixed speed v, as discussed in Ref. [2] above. The future location, or the additional distance traveled, ∆s, after some additional time ∆t, is always given by the equation ∆s = v∆t. The slope h, or the
marginal rate of increase of profits (MRP, for convenience), is like the speed v of a moving vehicle. The Type I and Type II companies differ only in the magnitude of the slope h, which affects the numerical value of the constant "c".

Note that we are actually dealing with "local" values of the constants h and c in profits-revenues space. These values apply over a (limited) range of revenues and profits, much like the situation with Hubble's constant. The exact value of the Hubble constant H0 depends on the distances and the velocities being considered in preparing the V-D plot (now called the Hubble diagram). One likes to think that there is a single value of the Hubble constant that applies to ALL galaxies at all distances, and that the universe is and has been expanding uniformly since it was created in a BIG BANG. Yes, some astrophysicists believe this. Others do not. Yet they all believe in Hubble's law and the relation V = H0 D, even if the numerical value of H0 is still a matter of debate. In fact, more recent observations with the Hubble Space Telescope (HST) are leading to a re-examination of the whole idea of "uniform" expansion, of whether there is actually an "acceleration", and, if so, why, and what that all means as far as Einstein's famous "blunder" is concerned. This is the beauty of astrophysics.

Why can't we start thinking of the growth and maturing of companies, with increasing revenues and profits, in the same way? Let's start thinking about revenues, which range from a few million to several hundreds of billions, much like the vast distance scales that we encounter in the Hubble diagram. Profits are just like the velocities of the galaxies. Like astronomers, we can make accurate "observations" on a number of companies. These are just like the galaxies that interest an astronomer. Here "accurate observations" means HONEST reporting of financial data, consistent with all rules and regulations. Only then can we deduce "laws" of far-reaching consequence.
Recall also what Johannes Kepler was able to do with Tycho Brahe’s observations on the motion of Mars. This has already been discussed in Ref.[1]. As demonstrated here, the linear law seems to be the most widely observed. What remains then is the accurate determination of the slope h, or the MRP.

Type III: h < 0 and c > 0
Negative h and large positive intercept c
Yet another possibility is what we will refer to as Type III behavior. At this point it is being suggested as a hypothetical idea. As revenues increase, is it conceivable that profits actually decrease? Can a company actually operate for a few quarters, or even a few years, with expanding revenues and decreasing profits? This is illustrated schematically in Figure 4.
[Chart: schematic of Profits, y ($, millions), versus Revenues, x ($, millions), illustrating hypothetical Type III behavior.]
Figure 4: Hypothetical Type III behavior. The two “diamonds” are supposed to represent actual data points. Profits decrease with increasing revenues between these two quarters (need not be consecutive). But, the company ignores this observation. The operations continue and soon the company is reporting losses as shown by the red “dots”. The company becomes unsustainable after it actually reports losses for two consecutive quarters and the negative trend becomes established. The company is then forced into bankruptcy (and reemerges – like the new GM today – corporations can actually become immortal!)
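Since the three types depend only on the signs of the fitted constants h and c, classification is mechanical; a small helper sketch (the function name is mine, and the example constants are the fits quoted in this article):

```python
def company_type(h, c):
    """Classify a company from its fitted linear law y = h*x + c.

    Type I:   h > 0, c < 0  (positive breakeven revenue x0 = -c/h)
    Type II:  h > 0, c > 0  (x0 < 0; no breakeven cutoff)
    Type III: h < 0, c > 0  (profits fall as revenues rise)
    """
    if h > 0 and c < 0:
        return "Type I"
    if h > 0 and c > 0:
        return "Type II"
    if h < 0 and c > 0:
        return "Type III"
    return "unclassified"

# Fits quoted in the text:
google = company_type(0.274, -135.5)          # Google's annual fit
facebook_mature = company_type(0.263, 24.82)  # Facebook's mature segment
hypothetical = company_type(-0.864, 1120.46)  # the Table 1 hypothetical
```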

Table 1: Hypothetical Type III behavior
Quarter    Revenues, x ($, millions)    Profits, y ($, millions)    Profit Margin, y/x
Q4 2011    950                          300                         0.316
Q1 2012    1060                         205                         0.193
Q2 2012    1130                         145                         0.128
Q3 2012    1250                         41                          0.033
Q4 2012    1340                         -37                         -0.027
Q1 2013    1500                         -175                        -0.117

The mathematical equation describing this behavior is y = -0.864x + 1120.46.
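The equation quoted with Table 1 can be recovered from the tabulated numbers by least squares; a quick sketch (the function name is mine):

```python
# Hypothetical Type III data from Table 1 ($, millions)
xs = [950.0, 1060.0, 1130.0, 1250.0, 1340.0, 1500.0]
ys = [300.0, 205.0, 145.0, 41.0, -37.0, -175.0]

def least_squares_line(xs, ys):
    """Fit y = h*x + c to the data by ordinary least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    h = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return h, my - h * mx

h, c = least_squares_line(xs, ys)
# h comes out near -0.864 and c near 1121, close to the equation quoted
# with Table 1; the line crosses into losses near x = -c/h, about $1297 million.
zero_crossing = -c / h
```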

Table 2: Annual and quarterly data for Best Buy (potential Type III behavior)

Year end date    Annual Revenues, x    Annual Profits, y    Quarterly Revenues, x    Quarterly Profits, y
                 ($, millions)         ($, millions)        ($, millions)            ($, millions)
3-Mar-07         35,934                1,377                12,899                   763
1-Mar-08         40,023                1,407                13,418                   737
28-Feb-09        45,015                1,003                14,724                   570
27-Feb-10        49,694                1,317                16,083                   651
26-Feb-11        49,747                1,277                16,553                   779
3-Mar-12         50,705                -1,231               16,630                   -1,698

We see some glimpses of this Type III behavior with Best Buy, which has been struggling to maintain its viability. The annual profits-revenues data, from the fiscal year ending March 2007 to the fiscal year ending March 2012, is given in Table 2, along with the corresponding quarterly data. These suggest a pattern of decreasing profits with increasing revenues, and then a sudden plunge into a loss for 2012. Of course, we are overlooking the data for the two intervening years, 2010 and 2011, in this assessment. Nonetheless, the Type III trend is also evident if we compare the annual data for 2010 and 2011. The higher annual revenue for 2011 yielded lower profits, although this trend is bucked in the quarterly data; see http://phx.corporate-ir.net/phoenix.zhtml?c=83192&p=quarterlyearnings
[Chart: Profits, y ($, millions), versus Revenues, x ($, millions), for Best Buy (2007-2012).]
Figure 5: The annual profits-revenues data for Best Buy, Inc., illustrating what looks like Type III behavior. Profits have been decreasing with increasing revenues. The two red squares are the data for the two years in which this overall trend is not observed.

In my earlier analyses (prior to 2006), I had observed that companies like GM, and even Ford, after they embarked on their massive cost-cutting programs (and even started spinning off their parts-making operations; Delphi was cast off by GM, and Ford soon followed with its own version, called Visteon), were briefly in the Type III mode. Unfortunately, I have NOT been able to retrieve the data from my old Excel files (four laptops ago, still searching!), and one cannot go back to the GM website to find this historical financial data. Very large, and once profitable, companies (like GM) have "credit" and can continue to operate for a few quarters, or even a few years, with decreasing profits (hence negative h and a large positive c). However, this is NOT sustainable, and the company will eventually be forced into bankruptcy, as happened with GM by June 2009. Indeed, such theoretical "speculations" regarding the Type III mode, and the simple analysis being proposed here, can be a useful tool for assessing the future survivability of a company. There are surely many other examples of such very "mature" or "old" companies, with unsustainable legacy costs, that went into bankruptcy after being on the verge of extinction for a few years.

It is to be hoped that Best Buy is able to overcome its Type III syndrome and return to profitability. (Profit margins are noticeably low and rarely exceeded 2% on an annual basis, even when profitable.)

Another example of what appears more definitely like Type III behavior is seen with Universal Insurance Holdings, Inc. This is a public company (stock symbol AMEX: UVE) trading at $3.55 as of Friday, May 25, 2012. The company has reported a healthy profit for the period 2008-2011 and is ranked No. 2 in the Fortune Small Business (FSB) 100; see http://money.cnn.com/magazines/fsb/fsb100/2009/full_list/index.html and http://money.cnn.com/magazines/fsb/fsb100/2009/snapshots/2.html

America's fastest-growing small public companies
2. Universal Insurance Holdings
Rank: 2 (Previous rank: N.A.)
CEO: Bradley Meier
Headquarters: Fort Lauderdale, FL
Employees: 182
Industry: Insurance
Revenue: $182.7 million (four quarters to 12/31/08)
Return to investors: 56.33% (three years to 12/31/08, annualized rate)

Through various subsidiaries, Universal provides insurance services throughout the state of Florida, including all aspects of insurance underwriting, distribution and claims processing. Its primary business is homeowner insurance, including covering about 360,000 Florida homeowners against hurricane damage. Its subsidiary Universal Property and Casualty Insurance Company recently secured approval to write insurance policies in North Carolina, Georgia, and Hawaii.

Table 3: Financial Data for UVE, Rank 2 in FSB 100
Year   Revenues, x    Net Income, y   Profit Margin,   Shares for EPS,   EPS
       ($ millions)   ($ millions)    y/x              millions
2007   189            54.0            0.286            35.6              $1.52
2008   183            40.0            0.219            37.4              $0.99
2009   211            28.8            0.137            37.6              $0.71
2010   239            37.0            0.154            39.1              $0.92
2011   226            20.1            0.088            39.2              $0.50

Revenues in the above table represent premiums earned plus other operating income. This annual data is plotted in Figure 6a. Notice that revenues increased from $183 million in 2008 to $226 million in 2011, but profits were cut in half, dropping from $40.0 million to $20.1 million. The earnings per share (EPS) also decreased, falling from $0.99 to $0.50.
[Figure 6a chart: Profits, y ($, millions) versus Revenues, x ($, millions) for Universal Insurance Holdings, Inc. (FSB 100 Rank 2 in 2012), annotated "Potential Type III behavior".]
Figure 6a: Potential Type III behavior. The profits decreased by exactly one-half with increasing revenues between 2008 and 2011. The data for 2008 and 2010 fall on either side of the straight line connecting the data for 2007 and 2011. The arrow indicates the revenue level (at $247.8 million) beyond which the company will start reporting a loss, if this annual trend continues. The analysis of the quarterly data (for the ten consecutive quarters ending with Q1 2012), however, shows Type I behavior; see Figure 6b. Nonetheless, the implications of the Type III behavior evident in the 5-year annual data need more careful study, and efforts must be made to address the cost structure to increase profits further.
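The breakeven revenue marked by the arrow follows from the straight line through the 2007 and 2011 data points of Table 3; a minimal sketch, in $ millions:

```python
# Breakeven revenue for a declining profits line: the x-intercept of the
# straight line through the 2007 and 2011 (revenues, profits) points
# from Table 3, in $ millions.
def breakeven_revenue(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    h = (y2 - y1) / (x2 - x1)   # slope of the line y = h*x + c (negative here)
    c = y1 - h * x1             # intercept
    return -c / h               # revenue at which profits cross zero

x0 = breakeven_revenue((189.0, 54.0), (226.0, 20.1))
print(round(x0, 1))  # close to the $247.8 million marked by the arrow
```

The same two-point construction reproduces the arrow's location to within a fraction of a million dollars.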
[Figure 6b chart: Quarterly Profits, y ($, thousands) versus Revenues, x ($, thousands) for Universal Insurance Holdings, Inc., Q4 2009 to Q1 2012.]
Figure 6b: Type I behavior is evident with the quarterly data. There is clearly a lot more scatter in this data, with a loss being reported for two of the ten consecutive quarters considered. To investigate this carefully, a graph of costs C versus revenues R was first prepared, with Costs = Revenues - Profits. (If profits are negative, costs exceed revenues.) There is much less scatter evident in the C versus R plot, and the equation of the best-fit line is C = 0.6646R + 13039.3. This gives the profits-revenues equation P = R - C = 0.335R - 13039.3, which is superimposed on the graph in Figure 6b. This method of deducing the C-R equation first, in order to deduce the P-R equation, is useful when the profits-revenues graph is difficult to interpret.

In this context, it is also of interest to review the financial data for Ford Motor Company for the twelve-year period 2000-2011, summarized in Table 4. Ford is emerging as a stronger company after all of its turmoil of the past decade. The years 2002 and 2003 seem to have been the watershed years, with Ford barely reporting a profit. This implies that Ford was operating close to its "breakeven" point, with all the revenues going to meet all of its total obligations, or "costs", or the "effective costs". Profits = Revenues - Costs, and profits had all but vanished.
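The "fit costs first" procedure just described is easy to sketch in a few lines; the revenue and profit figures below are illustrative stand-ins, not the actual UVE quarterly data:

```python
# Sketch of the "fit costs first" trick described above. Costs C = R - P
# often show far less scatter than profits P, so we fit the line
# C = b*R + a and recover the profits line P = R - C = (1 - b)*R - a.
import numpy as np

def profits_line_via_costs(revenues, profits):
    revenues = np.asarray(revenues, dtype=float)
    costs = revenues - np.asarray(profits, dtype=float)
    b, a = np.polyfit(revenues, costs, 1)  # best-fit line C = b*R + a
    return 1.0 - b, -a                     # slope and intercept of P = h*R + c

# Hypothetical quarterly figures in $ thousands (NOT the actual UVE data).
R = [40_000, 48_000, 55_000, 61_000, 70_000]
P = [500, 3_200, 5_400, 7_100, 10_600]

h, c = profits_line_via_costs(R, P)
print(f"P = {h:.4f}*R + {c:.1f}")
```

Because the cost points hug their regression line much more tightly than the profit points do, the slope and intercept recovered this way are more stable than a direct fit to the scattered profits.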
[Figure 7a chart: Costs, z = (x - y) ($, billions) versus Revenues, x ($, billions) for Ford (2000-2011), with the demarcation line Revenues = Costs; a data point falling below this line indicates a profit.]
Figure 7a: Graph of the financial data for Ford Motor Company for the period 2000-2011. The graph of profits versus revenues reveals a lot of "scatter" and no trends are decipherable. However, by calculating the overall "effective costs" for Ford (after all obligations, including taxes, are met), z = (x - y) = Revenues - Profits, we see a nice upward trend.


Then the "plunge", or perhaps the turnaround, began with slightly higher profits at higher revenues in both 2004 and 2005, followed by the massive losses of 2006 and 2008. Between 2009 and 2010, both revenues and profits increased. Likewise, between 2006 and 2007, there was a jump in revenues with a concurrent reduction in the reported losses. And now the biggest profit ($20.2 billion in 2011) has come with significantly lower revenues ($136.26 billion in 2011 compared to $176.9 billion in 2005, a reduction in revenues of about $40 billion). This means that Ford has actually changed its "cost" structure very significantly.

Table 4: Ford Motor Company Annual Revenues-Profits Data (2000-2011)

Year   Revenues, x    Profits, y     Profit Margin,   Costs = Revenues - Profits
       ($ billions)   ($ billions)   y/x              = (x - y), $ billions
2009   116.283        2.717          0.0234           113.566
2010   128.954        6.561          0.0509           122.393
2011   136.264        20.213         0.1483           116.051
2008   146.3          -14.7          -0.1005          161.0
2006   160.123        -12.613        -0.0788          172.736
2001   160.7          -5.3           -0.0330          166.0
2002   162.6          0.3            0.0018           162.3
2003   164.3          0.5            0.0030           163.8
2000   169.1          3.5            0.0207           165.6
2004   171.6          3.5            0.0204           168.1
2007   172.5          -2.7           -0.0157          175.2
2005   176.896        1.44           0.0081           175.456

The financial data has been sorted intentionally to reveal the trend with increasing revenues. From a staggering loss of $12.6 billion in 2006, Ford has moved to an astounding profit of $20.2 billion in 2011. (Yes, more than $20 billion in profit, just about one-half the loan given to GM to keep it alive. Cheers!) At this point, there is no clear "mathematical law" that appears to relate the rather "erratic" variations in the profits for Ford. In fact, after eliminating the extreme points (the unusually high 2011 profit, and the massive 2008 and 2006 losses; the other losses are not eliminated and counterbalance the years of decent profits), it appears that the best one could argue for (from a statistical standpoint) is a Type III behavior for Ford; see the analysis of the quarterly data in Figures 7b and 7c. The raw data analyzed here may be found in Appendix 4 at the end of this document.
[Figure 7b chart: Quarterly Profits, y ($, billions) versus Quarterly Revenues, x ($, billions) for Ford.]
Figure 7b: The quarterly profits-revenues plot for Ford (4Q2007-1Q2012), revealing what appears to be Type III behavior. Data for 21 consecutive recent quarters are plotted here. The conclusion of Type III behavior is being presented, reluctantly, after a careful analysis. Because of the "erratic" variation in profits, the costs-revenues plot was prepared first to study the statistically significant trend. The three "outliers", or "extremes", in the data, indicated by the solid dots, were eliminated from the linear regression analysis. These are 4Q2011 (34.6, 13.6), with a huge profit, and two quarters with huge losses, 2Q2008 (38.6, -8.667) and 4Q2008 (29.4, -5.875). The costs-revenues equation was determined first. This then yields the following profits-revenues equation, P = -0.0776R + 3.311, superimposed here onto the x-y scatter graph.
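Taking the fitted line quoted in the caption at face value, its breakeven point follows directly from simple arithmetic; a small sketch, with revenues in $ billions:

```python
# The fitted quarterly line quoted above, P = -0.0776*R + 3.311 ($ billions),
# slopes downward (Type III), so it predicts a breakeven revenue beyond
# which a quarter shows a loss.
h, c = -0.0776, 3.311

def predicted_profit(revenue):
    # profit implied by the fitted line at a given quarterly revenue
    return h * revenue + c

breakeven = -c / h  # quarterly revenue at which the fitted profit hits zero
print(round(breakeven, 1))
```

The line thus implies losses for quarterly revenues beyond roughly $42.7 billion, which is the Type III message of Figure 7b stated as a single number.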

[Figure 7c chart: magnified view of the quarterly Profits, y ($, billions) versus Revenues, x ($, billions) data for Ford.]
Figure 7c: A closer look at the same quarterly profits-revenues plot for Ford (4Q2007-1Q2012) presented in Figure 7b. A profit was reported for only 11 of the 21 quarters studied here, suggesting the apparent Type III behavior.

Hence, it appears that Ford Motor Company must be operating to the right of the peak of the general profits-revenues curve (see Figure 8 and the discussion in the next section) envisioned by the mathematical analysis. If true, this suggests that Ford must use a counterintuitive strategy and actually shrink operations to reduce its revenues and move closer to the maximum point, to reverse the slope of the profits-revenues graph. Indeed, the biggest annual profit ($20.2 billion in 2011) has actually come with significantly lower revenues ($136.26 billion in 2011, compared to $176.9 billion in 2005). Amazingly, reducing the revenues by $40 billion, compared to 2005, has increased profits from $1.44 billion in 2005 to $20.2 billion in 2011. Like Tiger Woods struggling to regain his world dominance in golf, with pars and bogeys coming at random, Ford has been reporting losses and profits and is "all over the map" in the profits-revenues space. It is instructive, however, to consider the last column of Table 4, where Costs = Revenues - Profits = (x - y) = z is listed.

A graph of costs z versus revenues x reveals a nice upward trend, with the data points falling above and below the line z = x. This line is like "par for the course" in golf. Points above the line represent losses, with costs exceeding revenues; these are akin to bogeys. Points below the line are akin to birdies and represent profits, with costs lower than revenues. When Ford begins to operate like a more consistent world-class golfer, we will begin to see a nice linear law in the profits-revenues space with minimal scatter, with all the data points lining up nicely along a straight line with a positive slope, y = hx + c, instead of the negative slope now evident in Figures 7b and 7c. It would be revealing to take a closer look at the profits-revenues data for Ford over, say, the last 20 years, to see if there is indeed a "maximum point", as suspected by this analysis.

§ 5. The Vision of Mount Profit
The Synthesis of Types I, II, and III
The above discussion of three basic types of companies permits us to take the next logical step in our theoretical speculations regarding the general behavior of companies in the profits-revenues space. This is now illustrated by the continuous curve in Figure 8, with a maximum point, which can be viewed as the "envelope" for all three types of behavior just discussed. If x is revenues and y is profits, we have just got our first glimpse of Mount Profit! The power-exponential law, discussed earlier in Refs. [1] and [2], can be used to mathematically model this generalized behavior. Taking the simplified version of Planck's law, i.e., Wien's law, y = mx^n e^(-ax), it can readily be shown that the rate of change of y with respect to x (the growth of profits with increasing revenues, which is of interest to us), given by the derivative dy/dx of this function, takes the form of equation 5 below.


dy/dx = (n - ax)(y/x) …………(5)
dy/dx = n(y/x), for the power law, with a = 0 …………(6)
dy/dx = (y/x) = h, for the linear law, with n = 1 and c = 0 …………(7)

[Figure 8 chart: Profits, y ($, millions) versus Revenues, x ($, millions); a single curve rising through the Type I and Type II regions to a maximum and then descending into the Type III region.]
Figure 8: The envelope of the combined Type I, II, and III modes (the continuous dashed curve) described mathematically by the power-exponential law. The results for the power law (a = 0, b = 0, c = 0 in the general equation) and the linear law (with n = 1 and c = 0) are deduced as special cases from equation 5. Unlike equations 6 and 7, which do not allow the derivative dy/dx to go to zero for any value of x, the power-exponential law shows that when n = ax, dy/dx = 0. Hence, the derivative dy/dx is positive, and the graph rises with a positive slope, until x = n/a. A company exhibiting Type I or Type II behavior is operating to the left of the maximum point of the power-exponential curve.

"Local" segments of this more general curve reveal the apparently linear behavior that has been emphasized so far. When x = n/a, dy/dx = 0 and the graph has its maximum point. For higher values of x, the derivative dy/dx becomes negative; this corresponds to Type III behavior and leads to decreasing profits with increasing revenues. This is certainly not desirable in the financial world, but we do observe it when a company is operating in a "crisis" mode. (Universal Insurance Holdings, see Figure 6a, seems like an exception, since it is NOT in a crisis mode and is reporting a nice profit.) These ideas could also, perhaps, be extended to the economy as a whole. Revenues now refer to government revenues, and "profits" are nothing more than the "budget surplus". If the outlays (or government expenditures) grow faster than revenues, we have a period of decreasing budget surplus with increasing government revenues. The US economy, with its "budget deficits", must be operating in the negative portion of the Type III line, or the power-exponential curve! The ideas presented here are by no means complete and need further refinement. Nonetheless, it appears that one can start "fresh" and take a new look at the functioning of the financial and business world, and the economy as a whole, starting with some of the simple mathematical models described here.
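The derivative in equation 5, and the vanishing slope at the peak x = n/a, can be verified symbolically; a minimal check, assuming the sympy library is available:

```python
# Symbolic check of equation 5: for y = m*x**n * exp(-a*x) (Wien's form),
# dy/dx = (n - a*x)*(y/x), so the slope vanishes at the peak x = n/a.
import sympy as sp

x, m, n, a = sp.symbols('x m n a', positive=True)
y = m * x**n * sp.exp(-a * x)
lhs = sp.diff(y, x)           # the actual derivative dy/dx
rhs = (n - a * x) * y / x     # the form claimed in equation 5
assert sp.simplify(lhs - rhs) == 0
```

Substituting x = n/a into either side gives zero, confirming that the maximum of Mount Profit sits exactly at x = n/a.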

§ 6. Nonlinear behavior
Acceleration and deceleration of profits
As discussed briefly, the power law, with n < 1 or n > 1, is the simplest type of nonlinear behavior that we can envision. The power law can be rewritten as equation 8 below. Again, we have three possibilities.

y = mx^n + c …………(8)
dy/dx = n(y - c)/x …………(9)

[Figure 9 chart: Profits, y ($, millions) versus Revenues, x ($, millions) for Google Inc (2001-2005), with labeled data points (1466, 105.6), (3189.2, 399.2), and (6139, 1465.4).]
Figure 9: A closer look at the profits-revenues data for Google Inc. for 2001-2005. Notice the initial "flat" performance for 2001-2003. Google then essentially took off in 2004 and 2005, with a huge acceleration in its profits growth as revenues increased. One can easily fit a rising curve with n > 1 to describe this data. Would that yield a reasonable description? The period considered here was followed, immediately, by a period of deceleration in profits growth in this profits-revenues space. This is revealed in Figure 11.

• With n = 1, y = mx + c and dy/dx = m = (y - c)/x. The derivative m, the rate of change, is NOT equal to the ratio y/x because of the nonzero c, which is related to the fixed costs. This is the reason why the profit margin, as determined by the ratio y/x, increases or decreases as revenues increase, depending on the numerical value of c, as deduced for the "local" range of profits-revenues of interest to us. The increasing profit margin observed with companies like Microsoft, in the 1980s and 1990s, for example, and with Google, more recently, is a manifestation of the negative intercept c made by the straight line describing the profits-revenues relation.

• With n < 1, dy/dx = n(y - c)/x decreases as revenues x increase. Profits increase, but at a decelerating rate. This was discussed in Ref. [2] with reference to Facebook. However, Facebook is a young and emerging company, and long-term profits-revenues data is lacking. The need for caution in rushing to a hasty judgment about Facebook (since n < 1, see Ref. [2], profits are rising but at a decelerating rate!) may be appreciated by reconsidering the data for Google Inc. and dividing it into two sub-periods. In Figures 9 and 10 we consider the period from 2001 to 2005, and in Figure 11 the sub-period 2005-2011.

• With n > 1, dy/dx = n(y - c)/x increases as revenues x increase. Profits will therefore increase at an accelerating rate (see Figures 9 and 10). However, to date, such a sustained acceleration in profits-revenues space does NOT seem to have been reported.
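The three regimes above can be illustrated numerically with equation 8; the m and c values here are arbitrary placeholders, not fitted to any company:

```python
# Numerical illustration of the three power-law regimes for y = m*x**n + c
# (equation 8), with illustrative m and c: successive profit increments
# grow when n > 1 (acceleration) and shrink when n < 1 (deceleration).
def increments(n, m=1.0, c=0.0, xs=(1, 2, 3, 4)):
    ys = [m * x**n + c for x in xs]
    return [round(b - a, 3) for a, b in zip(ys, ys[1:])]

accel = increments(1.5)  # increments grow with revenues
decel = increments(0.5)  # increments shrink with revenues
print(accel, decel)
```

With n = 1 the increments would be constant, which is just the linear law again.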
[Figure 10 chart: Profits, y ($, millions) versus Revenues, x ($, millions) for Google Inc (2001-2005), with the fitted power-law curve.]


Figure 10: The profits-revenues data for Google, for the initial period 2001-2005, can be modeled using the power-law curve with n = 1.5, as indicated here. The continuous curve is the graph of the equation y = 0.0028x^1.5. The index n = 1.5 = 3/2 is the same index observed in Kepler's third law for planetary orbits. The index n is fixed first (n = 2 seems unreasonable), and this then dictates the value of m in the power law. The value m = 0.0028 gives reasonable agreement with the data.
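As a rough check, the quoted fit y = 0.0028x^1.5 can be evaluated at the revenues of the three data points labeled in Figure 9:

```python
# Evaluating the quoted fit y = 0.0028 * x**1.5 at the revenues of the
# data points labeled in Figure 9 (both axes in $ millions).
points = [(1466, 105.6), (3189.2, 399.2), (6139, 1465.4)]
predicted = [round(0.0028 * x**1.5, 1) for x, _ in points]
print(predicted)
```

The predictions come out near 157, 504, and 1347 against actual profits of 105.6, 399.2, and 1465.4, so the curve captures the order of magnitude and the acceleration, though not the exact values.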
[Figure 11 chart: Profits, y ($, millions) versus Revenues, x ($, millions) for Google Inc (2005-2011).]
Figure 11: A closer look at the profits-revenues data for Google Inc. for the period 2005-2011. Notice the essentially "flat" period for 2007-2008. Since then, Google reveals what seems like power-law behavior with n < 1. However, such a nonlinear model misses the longer-term, more stable, trend revealed by the linear law. Google can be expected to follow this linear law for the immediate future.


§ 7. Discussion The Generalized Planck Law
The mathematical law known as the blackbody radiation law, presented formally by Max Planck at the meeting of the German Physical Society on December 14, 1900, is an example of a law which has been tested most elegantly with the observations on the cosmic microwave background radiation. The law, derived using statistical arguments, can readily be extended beyond the physics of blackbody radiation, where it was first conceived, to many other situations. An English translation of Planck's original paper may be found in the book Great Experiments in Physics, edited by Morris H. Shamos (Dover Publications, NY, 1959), pp. 301-314. Planck imagined the blackbody radiating heat as being made up of N microscopic elements, called resonators, or oscillators. As they vibrate about some mean position, the resonators radiate energy at all possible frequencies. What is the "average" energy of these resonators? This is the answer Planck was seeking back in 1900, which led him to the famous correction factor [e^(-ax)/(1 + be^(-ax))] = [e^(-ax)/(1 - e^(-ax))] if we set b = -1. The steps taken by Planck to derive this "correction factor" seemed to provide an elegant theoretical justification for the simpler correction factor e^(-ax) introduced by Wien, without any explanation, simply as a "curve-fitting" solution to explain the blackbody radiation data. (As noted earlier, in Ref. [2], Wien nonetheless received the Nobel Prize in 1911 for this important contribution.) If we just replace the words "energy" and "oscillators" by the words "money" and N "stocks", or N "companies", or N "revenue generators", or, more generally, by N "elements", we can extend the same statistical arguments to find the "average" of some "property" (not energy, not even money, any property) of interest. Just see how Planck begins his mathematical deliberations. The following is an exact quote.


"If one denotes the resonators by the numbers 1, 2, 3, ….. N and writes these side by side, and if one sets under each resonator the number of energy elements assigned to it by some arbitrary distribution, then one obtains for every 'complex' a pattern of the form:

Planck's illustration of a 'complex' in the December 1900 paper:

Resonator No.   1    2    3    4    5    6    7    8    9   10
Energy units    7   38   11    0    9    2   20    4    4    5

Here we assume N = 10 and P = 100." Planck uses the term "complex" (which describes one of the many ways of accomplishing the distribution of P = 100 elementary energy units among N = 10 resonators), following the terminology introduced by Boltzmann, who was the leading proponent (after James Clerk Maxwell, the pioneer here, who also developed the electromagnetic field theory) of the application of statistics to physics, and who largely developed what is now called Statistical Mechanics. There are many such "complexions" (an alternative term). The entropy S increases with the total number of complexions W (Boltzmann's relation S = k log W): the higher the number W, the higher the entropy S. Here Planck is illustrating the meaning of a 'complex' and how a fixed total amount of energy UN is distributed. The total energy equals 100 energy elements (the sum of all the numbers in the bottom row), distributed among the N = 10 resonators. In general, the total energy of N resonators is UN = Pε, where P is a very large integer and ε (epsilon) is some unknown elementary energy element, now called the Planck energy quantum. Later in the 1900 paper, Planck introduces ε = hν, seemingly out of the blue, where ν is the frequency at which the resonator is vibrating and h is a constant, now called the Planck constant. Instead of energy, Planck could just as easily have been distributing a fixed amount of money among N people, or a fixed total amount of revenues among N different products sold by a company, or a fixed amount of revenues among N companies in a sector of the economy, and so on. Hence, it appears that the transition from quantum physics to economics and finance can be accomplished almost "seamlessly" (or, to use the comforting business cliché, "transparently").

The correction factor deduced by Planck (by continuing the above to arrive at an expression for the "average" energy of the N resonators) was needed to modify the Rayleigh-Jeans law, which, mathematically speaking, is a power law. The linear law is a special case of the power law. The power law, in turn, is a special case of the power-exponential laws, conceived first by Wien and later by Planck, who modified Wien's law using the statistical arguments. Also, as noted already, Einstein starts with Wien's law and arrives at a linear law to explain the photoelectric effect, after invoking the essential new idea introduced by Planck to arrive at the "correction" factor: there is an elementary and indivisible unit of energy, or energy quantum. This, in words, is what Planck's law means. We have already discussed how the linear law and the power law apply in the financial and business world. We have also been able to conceptualize the idea of three basic types of companies, all of which follow the linear law. The "composite" of these three types of companies, envisioned in Figure 8, is the company that reveals the "elusive" maximum point on the profits-revenues curve. Is there such a company? Based on the discussion here, and some potential examples, it appears that there must be. At the very least, we can conceptualize the existence of such a company even if we cannot find it. Companies that grow, mature, and eventually die (due to mismanagement, lack of demand for their products, etc.), pushed to bankruptcy by falling profits, must all, it appears, go through this elusive maximum point before they "die" or "disappear" or go bankrupt.
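Planck's counting of complexions is the standard "stars and bars" combinatorial count; a quick sketch of the number W of ways to distribute P energy elements among N resonators:

```python
# Planck's counting: the number of ways W to distribute P indistinguishable
# energy elements among N resonators is the "stars and bars" binomial
# coefficient C(P + N - 1, N - 1).
from math import comb

def complexions(P, N):
    return comb(P + N - 1, N - 1)

W = complexions(100, 10)  # Planck's example: P = 100 elements, N = 10 resonators
print(W)
```

For Planck's own example this W is an astronomically large integer, which is why the logarithm (the entropy) is the natural quantity to work with.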
It follows that the simplest mathematical law (and one that can be justified using physical and/or statistical arguments) that must describe this situation in the financial world is the generalization of Planck's law, as suggested here. Again, with some reinterpretation of the meaning of the various mathematical symbols, and "nodding" our head through concepts such as entropy, temperature, and frequency that appear in Planck's mathematical deliberations (see the original paper cited above), we can arrive at equations 10 and 11 below.

y = mx^n [e^(-ax)/(1 + be^(-ax))] + c   (Planck's law for b = -1, c = 0) …………(10)
y = mx^n e^(-ax)                        (Wien's law: b = 0, c = 0)       ……….…(11)
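A quick numerical sanity check that equation 11 is the b = 0, c = 0 special case of equation 10 (the parameter values below are arbitrary, chosen only for illustration):

```python
# Sanity check on equations 10 and 11: with b = 0 and c = 0 the generalized
# Planck-type law reduces to the Wien form at every x, while b = -1 gives
# the Planck-type correction factor 1/(1 - exp(-a*x)).
import math

def general_law(x, m, n, a, b, c):
    return m * x**n * math.exp(-a * x) / (1 + b * math.exp(-a * x)) + c

def wien_law(x, m, n, a):
    return m * x**n * math.exp(-a * x)

for x in (0.5, 2.0, 10.0):
    assert abs(general_law(x, 1.0, 2.0, 0.5, 0.0, 0.0) - wien_law(x, 1.0, 2.0, 0.5)) < 1e-12
```

With b = -1 the denominator is less than one, so the Planck-type curve sits above the Wien curve at every x, which is exactly the role of the correction factor.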

The detailed arguments can be developed, but this is a purely academic exercise. Of immediate interest is the fact that equations 10 and 11 can be extended to economics, business, and finance by simply invoking the postulated (mathematical) equivalence between energy in physics and money in economics. Many other concepts, such as entropy (which is a measure of the extent of chaos in a system) and temperature, need to be clarified as well. The power law states that as x increases, y increases indefinitely, regardless of the exact value of the index n. If n > 1, there is an acceleration in the rate of increase of y as x increases. If n < 1, y again increases indefinitely with increasing x, but does so at a decelerating rate. In blackbody radiation studies, experiments had already revealed the existence of a "maximum point". In the financial world and in economics, we might actually be trying to avoid the appearance of such a maximum point: better to have profits increase indefinitely, even at a decelerating rate, than to hit a maximum point! Nonetheless, it appears that a fuller understanding of the power-exponential law, and the underlying statistical arguments, will help us better understand businesses, finance, and even the economy as a whole. Is there a maximum point? If yes, what is the significance of the maximum point? The maximum point arises when the slope dy/dx changes sign and becomes negative. This is what we mean by Type III behavior. If a company changes from Type II to Type III behavior, a maximum point will be observed. Type I behavior is observed when a company is operating close to x = x0. This has NOTHING to do with the "absolute" values of x in terms of $ figures. Type I behavior is seen with Facebook, Google, and also ExxonMobil; all three companies reveal Type I behavior although their revenues differ by a thousandfold.


In the business world, the transition from Type I to Type II is inevitable, and not entirely adverse. However, the transition from Type II to Type III is NOT desirable and signifies decreasing profits with increasing revenues (we see glimpses of this with Best Buy, a company which is now struggling, and also, it is suspected, with companies like GM before the bankruptcy filing). This will eventually become unsustainable: the company will cease to exist in its present form and will have to reexamine its cost structure. When costs exceed revenues, a company will report a loss, or decreasing profits. Its creditworthiness will allow it to continue operations for a few quarters with increasing revenues producing decreasing profits, or even negative profits. But the day of reckoning will come. We do see real-world examples of Type III behavior. One is Universal Insurance Holdings, Inc., currently ranked number 2 in the Fortune Small Business 100. The company has been reporting a healthy profit, but its profits decreased by exactly one-half between 2008 ($40 million in profits with revenues of $183 million) and 2011 ($20.1 million in profits with revenues of $226 million), even as revenues increased. Ford Motor Company also seems to reveal Type III behavior, but this needs more careful analysis. However, Ford has increased its profits significantly in recent years with decreasing revenues. Indeed, its highest profit of $20.2 billion in 2011 came at a reduced revenue level of $136.26 billion, compared to 2005, when it reported $1.44 billion in profits with $176.9 billion in revenues: decreasing revenues and increasing profits, an exact reversal of the trend we now see with Universal Insurance Holdings. Finally, Mount Profit (the maximum point on the profits-revenues curve) is something we do not want to see. But if we do see the peak of Mount Profit, we would be better advised to stay on that side of the peak where profits continue to rise. This may be the lesson here for an iconic corporate giant like Ford, and also for smaller and growing companies like Universal Insurance Holdings, Inc.


Conclusions
1. An attempt has been made here to extend Planck's radiation law from physics to economics (and, perhaps, even beyond) by treating energy in physics as being synonymous with money in economics. A generalized mathematical statement of Planck's blackbody radiation law suggests that it is nothing more than a power-exponential law, which implies a maximum point on the (x, y) graph, where x and y are any two quantities of interest with a stimulus-response type of relationship. This also yields the linear law and the nonlinear (power) law as special cases, as "local" segments.

2. Applying these ideas to analyze the profits-revenues behavior of a company suggests three types of linear behavior (called Type I, Type II, and Type III, for simplicity) and three types of nonlinear behavior (accelerating profits and decelerating profits, with the fixed-velocity, or zero-acceleration, linear law being the third special case). Amazingly, a careful study of the readily available profits-revenues data from real-world companies yields a stunning confirmation of these ideas.

3. Perhaps the most significant conclusion of all is the speculation regarding the existence of a maximum point on the profits-revenues curve. This obviously has far-reaching implications. Indeed, the existence of the three types of companies, operating under the three types of linear laws, which is readily confirmed, also implies the existence of a company (or companies) that will reveal this speculated theoretical maximum point.

4. It appears that we may also have found this "elusive" maximum point: in Ford Motor Company. The study of the annual and quarterly profits-revenues data for Ford suggests a Type III behavior, which implies that Ford Motor Company may be operating to the right of the maximum point. Reducing revenues further will change Ford from Type III to Type II, and eventually to Type I, operating more like ExxonMobil.
However, a more careful analysis of Ford's operations and the profits-revenues data since 2000, and even earlier years, is needed to fully support these conclusions.

5. Like modern heat engines (automotive engines, locomotive engines, aircraft engines, rocket engines, and the engines on smaller household appliances like lawnmowers, snowblowers, and other garden tools) that deliver any desired horsepower consistently, every single time the engine is turned on, perhaps a day will come when we can learn to operate companies like they are meant to be operated: like a real "Profits Engine", delivering profits for their stockholders and immense benefits to their employees, the local community, and society at large. It could all begin, amazingly, with a fuller understanding of the significance of the power-exponential law conceived by Planck in 1900 to solve a vexing problem in physics. One of the "Two Clouds" hanging over 19th-century physics, as Lord Kelvin put it, would thus dissipate, at the very dawn of the 20th century. (The other cloud dissipated in 1905, with the enunciation of the theory of relativity by Einstein.)

y = mx^n [e^(-ax)/(1 + be^(-ax))] + c


Appendix 1 Rampant Cost-cutting: Poor American Productivity
The extension of the mathematical laws discussed here to the business world has been of interest to me, personally, since the fateful summer of 1998, when General Motors faced a crippling strike and the world's largest (automotive) company literally stopped producing cars. Yes, the whole manufacturing operation came to a standstill, and not a single car was being produced, due to the ripple effect of the "just-in-time" delivery systems that we now have in place. As the strike continued, parts were not shipped to the assembly plants, and in a matter of days the whole manufacturing operation came to a grinding halt. As a GM employee, working at their Research and Development Center in Warren, MI, I became intrigued, during the two-week mandatory summer vacation, by the newspaper reports about how inefficient GM was, about its poor labor productivity, and about how all of its lazy workers were just collecting their fat paychecks. In fact, all of American labor was considered to have extremely low productivity compared to the Japanese. Yes, "American" blue-collar workers, working in Japanese plants in the USA, were more efficient than those working for GM, or Ford, or Chrysler! This "productivity gap" literally meant billions of dollars of lost profits for a huge company like GM (and yes, Ford and Chrysler as well). Cost-cutting became the mantra, and "outsourcing" of manufacturing operations seemed to be the obvious solution to all these bloated, out-of-control labor costs. This idea of "outsourcing" was nothing new and was already being implemented in other sectors of the economy (the manufacturing of shoes, garments, TVs, cameras, cellphones, etc.). Now, iconic American businesses, like GM and Ford, decided to "outsource" car manufacturing (to Canada and Mexico) to cut labor costs. Soon to follow might be the outsourcing of accountants, and doctors, and, yes, lawyers (ah, that will be the day!). Medical tourism, as it is called, to control rising healthcare costs, is essentially the "outsourcing" of doctors.

Page 39 of 62

As an R&D person all of my professional life, I was fascinated by the large volumes of x and y data being compiled for the North American automotive plants (in the USA, Canada, and Mexico) owned by GM, Ford, and Chrysler (the inefficient trio) and their Japanese counterparts (Toyota, Honda, and Nissan – the efficient trio, or the transplants). Here x is the number of vehicles produced and y is the number of workers, or labor hours. Labor productivity is the ratio y/x. In such analyses, the productivity metric 'number of workers per vehicle', or WPV, was soon replaced by the more politically correct metric of 'labor hours per vehicle', or HPV.

Labor productivity, y/x = WPV = Number of workers / Number of vehicles ….(1)

Or, Productivity, y/x = HPV = Labor hours / Number of vehicles ….(2)

As revenues increase, we expect profits to increase. In the same way, as the number of vehicles produced, x, increases, the number of workers (or labor hours), y, employed in a plant will also increase. Is there a simple law relating x and y in this problem? Is it the law y = hx, passing through the origin? Or is it the law y = hx + c, with a finite nonzero intercept? What is the significance of a nonzero intercept in this problem, if there is one?

That is how I realized the significance of the nonzero intercept c in the linear law y = hx + c, or y = mx + c, or y = ax + b. These are all just different ways of writing the same law. In general, when x = 0, y = c, and c = 0 is a special case of this general law. The ratio y/x can be used for comparisons if and only if c = 0. The automotive labor productivity data (or, more generally, any labor productivity data, such as that compiled monthly by the Bureau of Labor Statistics for the whole economy) can be described by the linear law y = hx + c, not the law y = mx. How many hours does it take to assemble a cellphone, a camera, a TV, a refrigerator, etc.?
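The pitfall with the ratio y/x can be sketched numerically. In this minimal sketch, the plant sizes and the coefficients h and c are invented purely for illustration; only the form of the law, y = hx + c, comes from the discussion above.

```python
# Sketch: why the ratio y/x (HPV) misleads when the underlying law is y = hx + c.
# The plant sizes and the coefficients h, c below are invented for illustration.

h, c = 30.0, 5000.0  # labor hours per vehicle (slope) and fixed overhead hours (intercept)

for x in [100, 1000, 10000]:             # vehicles produced per month
    y = h * x + c                        # total labor hours, by the linear law
    print(f"x = {x:5d} vehicles -> HPV = y/x = {y / x:.1f} hours/vehicle")
# x =   100 vehicles -> HPV = y/x = 80.0 hours/vehicle
# x =  1000 vehicles -> HPV = y/x = 35.0 hours/vehicle
# x = 10000 vehicles -> HPV = y/x = 30.5 hours/vehicle

# The ratio y/x = h + c/x falls toward h as x grows: a big plant looks
# "more productive" than a small one even though both obey the same law.
```

Both plants here obey exactly the same law; the apparent productivity gap is entirely an artifact of the nonzero intercept c.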
One can keep gathering such data, and before we realize it, we will see that Type I behavior (h > 0 and c < 0) has been replaced by Type II behavior (h > 0 and c > 0). Not too long ago, I distinctly remember, a Wall Street Journal article explored the politically sensitive question of why the unemployment rate keeps going down while the number of unemployed just keeps going up. This (labor productivity and unemployment data) is an infinitely more complex and politically charged topic and is best avoided here.

Just prepare an x-y "scatter" graph and see if we are observing Type I or Type II behavior. Sometimes we even observe Type III behavior, as with traffic fatality statistics – another "hot", politically charged topic, with Texas having raised its speed limit to 85 mph and, who knows, someone soon pushing for a 100 mph speed limit! They already tried NO SPEED LIMIT, or a "prudent" speed limit, in Montana, and the law was quickly overturned in the courts. Some of my writings on this topic (Does Speed Kill?) can be found by those who know how to "Google" it.

There is NO END to the (ab)use of the ratio y/x. Ratios are the easiest thing to calculate. But lurking in the shadows is that nonzero intercept c – what Einstein so wisely called the "work function" in his 1905 Nobel Prize winning paper on the photoelectric effect – and it affects the whole social, political, environmental, financial, and economic world around us.

Work Done = Heat In – Heat Out (describes the working of any "heat engine")
K = E – W (kinetic energy of the photoelectron K, energy of the photon E = hf, work function W)
Profits = Revenues – Costs
Savings = Income – Expenses
Budget Surplus = Government Revenues – Government Outlays
Number of Unemployed = Total Labor Force – Number of Employed

The first two equations in this list are simple statements of far-reaching laws that forever changed physics in the 19th and 20th centuries. The equations that follow are also simple statements, all implying that a nonzero intercept – one that even Planck recognized in 1900 – must always be introduced. The entropy S_N = k ln W + constant. Planck writes in his December 1900 paper that laid the foundations of quantum physics (see the reference cited in §7 to read the English translation of the original paper), "We now set the entropy S_N of the system (of N resonators) as proportional
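Einstein's photoelectric law K = hf – W is the prototype of a linear law with a nonzero intercept. The following worked example is a sketch only: the work function used (sodium, W ≈ 2.28 eV) is a standard textbook value, not a number from this paper, and the light frequency is chosen arbitrarily.

```python
# Worked example of Einstein's photoelectric law K = hf - W.
# W = 2.28 eV is a textbook work function for sodium (an assumed value,
# not from this paper); the frequency f is chosen for illustration.

h = 4.1357e-15          # Planck's constant in eV*s
W = 2.28                # work function of sodium, eV
f = 1.0e15              # frequency of incident light, Hz

K = h * f - W           # kinetic energy of the ejected photoelectron, eV
f0 = W / h              # threshold frequency: below this, no electrons are ejected
print(f"K = {K:.2f} eV, threshold f0 = {f0:.2e} Hz")
# K = 1.86 eV, threshold f0 = 5.51e+14 Hz
```

The threshold frequency f0 = W/h plays exactly the role of the intercept c in y = hx + c: below it, the "output" vanishes no matter how intense the light.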

to the logarithm of W (the number of possible "complexions"), within an arbitrary additive constant, so that the N resonators together have an energy E_N." We cannot overlook the significance of these "arbitrary additive constants", which describe, in words, what the symbol c is doing in the simple linear law y = hx + c. The correct equation for the entropy is S_N = k ln W + S_0, where S_0 is the entropy in the limit when W = 1, that is, when there is only ONE single complexion – just one single way to distribute energy, or money, or whatever it is that is of interest to us.

Physicists have spent a lot of time thinking about the meaning of S_0 and even arrived at what is sometimes called the Third Law of Thermodynamics to finally dismiss this annoying "arbitrary additive" constant; see the link below. And then they realized that their "theory" had overlooked something and a new law had to be added. But there were three laws of thermodynamics in place already. So, they called their "fourth" law the "Zeroth" law of thermodynamics, since it must precede the other three.

http://en.wikipedia.org/wiki/Third_law_of_thermodynamics
http://en.wikipedia.org/wiki/Zeroth_law_of_thermodynamics

We must understand all these laws thoroughly, and their deeper implications, as we try to extend Planck's mathematical law (and all of its theoretical and statistical underpinnings) from quantum physics to economics and to other systems (such as labor productivity studies, traffic fatality studies, etc.) far beyond where it was found to be useful when first conceived in 1900.
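Planck's counting of "complexions" can be sketched directly: the number of ways W to distribute P equal energy elements among N resonators is the binomial coefficient (N + P − 1 choose P), and the entropy follows as S_N = k ln W, up to the arbitrary additive constant discussed above. The particular values of N and P below are illustrative, not from the paper.

```python
# Sketch of Planck's 1900 combinatorics: W = number of "complexions" for
# distributing P equal energy elements among N resonators, with entropy
# S_N = k ln W (plus an arbitrary additive constant, as Planck notes).
from math import comb, log

k = 1.380649e-23        # Boltzmann constant, J/K

def complexions(N, P):
    # W = (N + P - 1)! / (P! (N - 1)!) = C(N + P - 1, P)
    return comb(N + P - 1, P)

N, P = 10, 100          # illustrative numbers, not from the paper
W = complexions(N, P)
S = k * log(W)          # entropy, up to the additive constant S_0
print(f"W = {W}, S_N = {S:.3e} J/K")

print(complexions(1, 5))  # one resonator: only ONE complexion, so ln W = 0
# 1
```

The last line is the W = 1 limit mentioned above: with a single resonator there is only one way to distribute the energy, ln W = 0, and all that remains of the entropy is the constant S_0.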
The following, extracted from the above, makes instructive reading: Sommerfeld in 1951 gave the title the "Zeroth Law" to the statement "Equality of temperature is a condition for thermal equilibrium between two systems or between two parts of a single system"; he wrote that this title followed the suggestion of Fowler, made when he was giving an account of a certain book.[22] Sommerfeld's statement took the existence of temperature for granted, and used it to specify one of the characteristics of thermodynamic equilibrium. This is converse to many statements that are labeled as the zeroth law, which take thermal equilibrium for granted and use it to contribute to the concept of temperature. We may guess that Fowler had made his suggestion because the

notion of temperature is in effect a presupposition of thermodynamics that earlier physicists had not felt needed explicit statement as a law of thermodynamics, and because the mood of his time, pursuing a "mechanical" axiomatic approach, wanted such an explicit statement. The first law of thermodynamics, Work = Heat In – Heat Out, describes the working of a "heat engine" and is essentially the same as the law of conservation of energy. The second law is more enigmatic and is sometimes stated as follows: heat flows of its own accord only from a body at a high temperature to a body at a lower temperature. The question posed by Sommerfeld is "What is temperature?" – something we all seem to understand intuitively. We take it for granted. How is it measured? And how do we use the idea of a "temperature" in areas outside physics – a "hot" stock, a "hot" golfer, a "hot" political candidate, and so on?

As always, “We live in interesting times.” Cheers!
http://www.huffingtonpost.com/2008/01/21/short-on-economicunderst_n_82529.html (Senator Phil Gramm with Presidential candidate McCain.)
http://www.democraticunderground.com/discuss/duboard.php?az=view_all&address=389x2703670
http://ca.answers.yahoo.com/question/index?qid=20110919101928AAt5PmU

P. S. There is also another law, similar to the second law of thermodynamics, in the world of economics: "Only a rich man can give a job to a poor man. Or, money flows of its own accord from the rich to the poor."

*******

Or, "When a poor man has money in his pocket, he goes out and spends it and creates far more jobs than any rich man ever did."

*************

How does money flow? Is it from customers to companies (revenues), or from the companies to their employees and suppliers (costs to the company)? Can we now use these common-sense "laws" to extend quantum physics to economics?

Appendix 2: Hubble’s 1929 Paper
http://www.mpa-garching.mpg.de/~lxl/personal/images/science/hub_1929.html

Note: The term "nebulae" used here by Hubble is the same as "galaxy," which became more popular in the years after Hubble wrote this paper.

From the Proceedings of the National Academy of Sciences, Volume 15 : March 15, 1929 : Number 3

A RELATION BETWEEN DISTANCE AND RADIAL VELOCITY AMONG EXTRA-GALACTIC NEBULAE
By Edwin Hubble
Mount Wilson Observatory, Carnegie Institution of Washington
Communicated January 17, 1929

Determinations of the motion of the sun with respect to the extra-galactic nebulae have involved a term of several hundred kilometers which appears to be variable. Explanations of this paradox have been sought in a correlation between apparent radial velocities and distances, but so far the results have not been convincing. The present paper is a re-examination of the question, based on only those nebular distances which are believed to be fairly reliable.

Distances of extra-galactic nebulae depend ultimately upon the application of absolute-luminosity criteria to involved stars whose types can be recognized. These include, among others, Cepheid variables, novae, and blue stars involved in emission nebulosity. Numerical values depend upon the zero point of the period-luminosity relation among Cepheids; the other criteria merely check the order of the distances. This method is restricted to the few nebulae which are well resolved by existing instruments.
A study of these nebulae, together with those in which any stars at all can be recognized, indicates the probability of an approximately uniform upper limit to the absolute luminosity of stars, in the late-type spirals and irregular nebulae at least, of the order of M (photographic) = -6.3.[1] The apparent luminosities of the brightest stars in such nebulae are thus criteria which, although rough and to be applied with caution, furnish reasonable estimates of the distances of all extra-galactic systems in which even a few stars can be detected.


TABLE 1
NEBULAE WHOSE DISTANCES HAVE BEEN ESTIMATED FROM STARS INVOLVED OR FROM MEAN LUMINOSITIES IN A CLUSTER

Object         m_s     r       v       m_t    M_t
S. Mag.        ..      0.032   + 170    1.5   -16.0
L. Mag.        ..      0.034   + 290    0.5   -17.2
N.G.C. 6822    ..      0.214   - 130    9.0   -12.7
598            ..      0.263   -  70    7.0   -15.1
221            ..      0.275   - 185    8.8   -13.4
224            ..      0.275   - 220    5.0   -17.2
5457           17.0    0.45    + 200    9.9   -13.3
4736           17.3    0.5     + 290    8.4   -15.1
5194           17.3    0.5     + 270    7.4   -16.1
4449           17.8    0.63    + 200    9.5   -14.5
4214           18.3    0.8     + 300   11.3   -13.2
3031           18.5    0.9     -  30    8.3   -16.4
3627           18.5    0.9     + 650    9.1   -15.7
4826           18.5    0.9     + 150    9.0   -15.7
5236           18.5    0.9     + 500   10.4   -14.4
1068           18.7    1.0     + 920    9.1   -15.9
5055           19.0    1.1     + 450    9.6   -15.6
7331           19.0    1.1     + 500   10.4   -14.8
4258           19.5    1.4     + 500    8.7   -17.0
4151           20.0    1.7     + 960   12.0   -14.2
4382           ..      2.0     + 500   10.0   -16.5
4472           ..      2.0     + 850    8.8   -17.7
4486           ..      2.0     + 800    9.7   -16.8
4649           ..      2.0     +1090    9.5   -17.0
Mean                                          -15.5

m_s = photographic magnitude of brightest stars involved.
r   = distance in units of 10^6 parsecs. The first two are Shapley's values.
v   = measured velocities in km./sec. N.G.C. 6822, 221, 224 and 5457 are recent determinations by Humason.
m_t = Holetschek's visual magnitude as corrected by Hopmann. The first three objects were not measured by Holetschek, and the values of m_t represent estimates by the author based upon such data as are available.
M_t = total visual absolute magnitude computed from m_t and r.

Finally, the nebulae themselves appear to be of a definite order of absolute luminosity, exhibiting a range of four or five magnitudes about an average value M (visual) = - 15.2.[1] The application of this statistical average to individual cases can rarely be used to advantage, but where considerable numbers are involved, and


especially in the various clusters of nebulae, mean apparent luminosities of the nebulae themselves offer reliable estimates of the mean distances.

Radial velocities of 46 extra-galactic nebulae are now available, but individual distances are estimated for only 24. For one other, N. G. C. 3521, an estimate could probably be made, but no photographs are available at Mount Wilson. The data are given in table 1. The first seven distances are the most reliable, depending, except for M 32, the companion of M 31, upon extensive investigations of many stars involved. The next thirteen distances, depending upon the criterion of a uniform upper limit of stellar luminosity, are subject to considerable probable errors but are believed to be the most reasonable values at present available. The last four objects appear to be in the Virgo Cluster. The distance assigned to the cluster, 2 × 10^6 parsecs, is derived from the distribution of nebular luminosities, together with luminosities of stars in some of the later-type spirals, and differs somewhat from the Harvard estimate of ten million light years.[2]

The data in the table indicate a linear correlation between distances and velocities, whether the latter are used directly or corrected for solar motion, according to the older solutions. This suggests a new solution for the solar motion in which the distances are introduced as coefficients of the K term, i.e., the velocities are assumed to vary directly with the distances, and hence K represents the velocity at unit distance due to this effect. The equations of condition then take the form

rK + X cos(alpha) cos(delta) + Y sin(alpha) cos(delta) + Z sin(delta) = v.

Two solutions have been made, one using the 24 nebulae individually, the other combining them into 9 groups according to proximity in direction and in distance. The results are
        24 objects        9 groups
X       - 65 ± 50         +  3 ± 70
Y       +226 ± 95         +230 ± 120
Z       -195 ± 40         -133 ± 70
K       +465 ± 50         +513 ± 60   km./sec. per 10^6 parsecs
A       286 deg.          269 deg.
D       +40 deg.          +33 deg.
Vo      306 km./sec.      247 km./sec.

For such scanty material, so poorly distributed, the results are fairly definite. Differences between the two solutions are due largely to the four Virgo nebulae,

which, being the most distant objects and all sharing the peculiar motion of the cluster, unduly influence the value of K and hence of Vo. New data on more distant objects will be required to reduce the effect of such peculiar motion. Meanwhile round numbers, intermediate between the two solutions, will represent the probable order of the values. For instance, let A = 277 deg., D = +36 deg. (Gal. long. = 32 deg., lat. = +18 deg.), Vo = 280 km./sec., K = +500 km./sec. per million parsecs. Mr. Stromberg has very kindly checked the general order of these values by independent solutions for different groupings of the data.

A constant term, introduced into the equations, was found to be small and negative. This seems to dispose of the necessity for the old constant K term. Solutions of this sort have been published by Lundmark,[3] who replaced the old K by k + lr + mr^2. His favored solution gave k = 513, as against the former value of the order of 700, and hence offered little advantage.
TABLE 2
NEBULAE WHOSE DISTANCES ARE ESTIMATED FROM RADIAL VELOCITIES

Object         v       v_s    r      m_t    M_t
N.G.C. 278    + 650    110    1.52   12.0   -13.9
404           -  25     65    ..     11.1    ..
584           +1800     75    3.45   10.9   -16.8
936           +1300    115    2.37   11.1   -15.7
1023          + 300     10    0.62   10.2   -13.8
1700          + 800    220    1.16   12.5   -12.8
2681          + 700     10    1.42   10.7   -15.0
2683          + 400     65    0.67    9.9   -14.3
2841          + 600     20    1.24    9.4   -16.1
3034          + 290    105    0.79    9.0   -15.5
3115          + 600    105    1.00    9.5   -15.5
3368          + 940     70    1.74   10.0   -16.2
3379          + 810     65    1.49    9.4   -16.4
3489          + 600     50    1.10   11.2   -14.0
3521          + 730     95    1.27   10.1   -15.4
3623          + 800     35    1.53    9.9   -16.0
4111          + 800     95    1.79   10.1   -16.1
4526          + 580     20    1.20   11.1   -14.3
4565          +1100     75    2.35   11.0   -15.9
4594          +1140     25    2.23    9.1   -17.6
5005          + 900    130    2.06   11.1   -15.5
5866          + 650    215    1.73   11.7   -14.5
Mean                                 10.5   -15.3


The residuals for the two solutions given above average 150 and 110 km./sec. and should represent the average peculiar motions of the individual nebulae and of the groups, respectively. In order to exhibit the results in a graphical form, the solar motion has been eliminated from the observed velocities and the remainders, the distance terms plus the residuals, have been plotted against the distances. The run of the residuals is about as smooth as can be expected, and in general the form of the solutions appears to be adequate. The 22 nebulae for which distances are not available can be treated in two ways. First, the mean distance of the group derived from the mean apparent magnitudes can be compared with the mean of the velocities corrected for solar motion. The result, 745 km./sec. for a distance of 1.4 x 10[6] parsecs, falls between the two previous solutions and indicates a value for K of 530 as against the proposed value, 500 km./sec. Secondly, the scatter of the individual nebulae can be examined by assuming the relation between distances and velocities as previously determined. Distances can then be calculated from the velocities corrected for solar motion, and absolute magnitudes can be derived from the apparent magnitudes. The results are given in table 2 and may be compared with the distribution of absolute magnitudes among the nebulae in table 1, whose distances are derived from other criteria. N. G. C. 404 can be excluded, since the observed velocity is so small that the peculiar motion must be large in comparison with the distance effect. The object is not necessarily an exception, however, since a distance can be assigned for which the peculiar motion and the absolute magnitude are both within the range previously determined. 
The two mean magnitudes, -15.3 and -15.5, the ranges, 4.9 and 5.0 mag., and the frequency distributions are closely similar for these two entirely independent sets of data; and even the slight difference in mean magnitudes can be attributed to the selected, very bright, nebulae in the Virgo Cluster. This entirely unforced agreement supports the validity of the velocity-distance relation in a very evident matter. Finally, it is worth recording that the frequency distribution of absolute magnitudes in the two tables combined is comparable with those found in the various clusters of nebulae.

Velocity-distance graphs such as Figure 1 in Hubble's 1929 paper are now called Hubble diagrams. The velocity v is plotted in kilometers per second (km/s) and distances are plotted in astronomical units called megaparsecs (Mpc), where 1 Mpc = 3.09 × 10^19 km = 3.26 million light years. One light year is the distance light travels in one year at the fixed speed of about 300 million meters per second (299,792,458 m/s, exactly). http://heasarc.nasa.gov/docs/cosmic/glossary.html

Velocity-Distance Relation among Extra-Galactic Nebulae.

Figure 1: Radial velocities, corrected for solar motion, are plotted against distances estimated from involved stars and mean luminosities of nebulae in a cluster. The black discs and full line represent the solution for solar motion using the nebulae individually; the circles and broken line represent the solution combining the nebulae into groups; the cross represents the mean velocity corresponding to the mean distance of 22 nebulae whose distances could not be estimated individually.

The results establish a roughly linear relation between velocities and distances among nebulae for which velocities have been previously published, and the relation appears to dominate the distribution of velocities. In order to investigate the matter on a much larger scale, Mr. Humason at Mount Wilson has initiated a program of determining velocities of the most distant nebulae that can be observed with confidence. These, naturally, are the brightest nebulae in clusters of nebulae. The first definite result,[4] v = +3779 km./sec. for N. G. C. 7619, is thoroughly consistent with the present conclusions. Corrected for the solar motion, this velocity is +3910, which, with K = 500, corresponds to a distance of 7.8 × 10^6 parsecs. Since the apparent magnitude is 11.8, the absolute magnitude at such a distance is -17.65, which is of the right order for the brightest nebulae in a cluster. A preliminary distance, derived independently from the cluster of which this nebula appears to be a member, is of the order of 7 × 10^6 parsecs.

The constant K highlighted here is Hubble's estimate of what is now called the Hubble constant, K = 500 km/s/Mpc (km/s per Mpc).
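Hubble's arithmetic for N.G.C. 7619 can be checked directly from the numbers quoted in the passage above, using the standard distance-modulus relation M = m − 5 log10(d / 10 pc):

```python
# Check of Hubble's figures for N.G.C. 7619: corrected velocity +3910 km/s
# with K = 500 km/s/Mpc gives the distance; the distance plus the apparent
# magnitude m = 11.8 then gives the absolute magnitude via the distance modulus.
from math import log10

v_corr = 3910.0                 # km/s, corrected for solar motion
K = 500.0                       # km/s per Mpc
d_mpc = v_corr / K              # distance in Mpc
d_pc = d_mpc * 1.0e6            # distance in parsecs

m = 11.8
M = m - 5 * log10(d_pc / 10.0)  # distance modulus: M = m - 5 log10(d / 10 pc)
print(f"d = {d_mpc:.2f} Mpc, M = {M:.2f}")
# d = 7.82 Mpc, M = -17.67
```

The result reproduces the distance of 7.8 × 10^6 parsecs and, to within rounding, the absolute magnitude of -17.65 quoted in the paper.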

New data to be expected in the near future may modify the significance of the present investigation or, if confirmatory, will lead to a solution having many times the weight. For this reason it is thought premature to discuss in detail the obvious consequences of the present results. For example, if the solar motion with respect to the clusters represents the rotation of the galactic system, this motion could be subtracted from the results for the nebulae and the remainder would represent the motion of the galactic system with respect to the extra-galactic nebulae. The outstanding feature, however, is the possibility that the velocity-distance relation may represent the de Sitter effect, and hence that numerical data may be introduced into discussions of the general curvature of space. In the de Sitter cosmology, displacements of the spectra arise from two sources, an apparent slowing down of atomic vibrations and a general tendency of material particles to scatter. The latter involves an acceleration and hence introduces the element of time. The relative importance of these two effects should determine the form of the relation between distances and observed velocities; and in this connection it may be emphasized that the linear relation found in the present discussion is a first approximation representing a restricted range in distance.

[1] Mt. Wilson Contr., No. 324; Astroph. J., Chicago, Ill., 64, 1926 (321).
[2] Harvard Coll. Obs. Circ., 294, 1926.
[3] Mon. Not. R. Astr. Soc., 85, 1925 (865-894).
[4] These PROCEEDINGS, 15, 1929 (167).

***************** This ends Hubble's original 1929 paper ****************

http://en.wikipedia.org/wiki/Hubble%27s_law

The following is a "verbatim" quote extracted from the Wikipedia article cited, including the diagram.

Hubble’s law
The linear relationship between the velocity V of a galaxy and its distance D can be expressed as V = H0D. The constant H0 is the slope of the straight line and is

now referred to as the Hubble constant. This law was actually first derived from the General Relativity equations by Georges Lemaître in a 1927 article where he proposed that the Universe is expanding and suggested an estimated value of the rate of expansion, i.e., the Hubble constant.[2][3][4][5][6] Two years later Edwin Hubble confirmed the existence of that law and determined a more accurate value for the constant that now bears his name.[7] The recession velocity of the objects was inferred from their redshifts, many measured earlier by Vesto Slipher (1917) and related to velocity by him.[8]

Fit of redshift velocities to Hubble's law; patterned after William C. Keel (2007), The Road to Galaxy Formation, Berlin: Springer, published in association with Praxis Pub., Chichester, UK. ISBN 3-540-72534-2.

Various estimates for the Hubble constant exist. The HST Key H0 Group fitted type Ia supernovae for redshifts between 0.01 and 0.1 to find that H0 = 71 ± 2 (statistical) ± 6 (systematic) km s^-1 Mpc^-1,[21] while Sandage et al. find H0 = 62.3 ± 1.3 (statistical) ± 5 (systematic) km s^-1 Mpc^-1.[22]

The law is often expressed by the equation v = H0D, with H0 the constant of proportionality (the Hubble constant) between the "proper distance" D to a galaxy (which can change over time, unlike the comoving distance) and its velocity v (i.e., the derivative of proper distance with respect to the cosmological time coordinate; see Uses of the proper distance for some discussion of the subtleties of this definition of 'velocity'). The SI unit of H0 is s^-1, but it is most frequently quoted in (km/s)/Mpc, thus giving the speed in km/s of a galaxy 1 megaparsec (3.09 × 10^19 km) away. The reciprocal of H0 is the Hubble time.
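The reciprocal of H0 is a simple unit conversion away from a time in years. A quick sketch, using the conversion 1 Mpc = 3.09 × 10^19 km from the quote above and the 2011 HST estimate H0 = 73.8 (km/s)/Mpc quoted nearby in this appendix:

```python
# The Hubble time 1/H0, converted from (km/s)/Mpc to years.
H0 = 73.8                       # Hubble constant, (km/s)/Mpc (2011 HST estimate)
mpc_km = 3.09e19                # kilometers per megaparsec

H0_per_s = H0 / mpc_km          # H0 in s^-1, the SI unit mentioned above
hubble_time_s = 1.0 / H0_per_s  # Hubble time in seconds
hubble_time_yr = hubble_time_s / 3.156e7   # ~3.156e7 seconds per year
print(f"1/H0 = {hubble_time_yr:.2e} years")
# 1/H0 = 1.33e+10 years
```

The result, roughly 13 billion years, is of the same order as the currently accepted age of the universe; Hubble's own K = 500 would give a Hubble time of only about 2 billion years.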

A recent 2011 estimate of the Hubble constant, which used a new infrared camera on the Hubble Space Telescope (HST) to measure the distance and redshift for a collection of astronomical objects, gives a value of H0 = 73.8 ± 2.4 (km/s)/Mpc. An alternate approach using data from galactic clusters gave a value of H0 = 67.0 ± 3.2 (km/s)/Mpc.[11][12]

[Figure A chart: velocity v (in km/s), from -400 to 1600, plotted against distance d (in Mpc, 10^6 parsec), from 0 to 3.5, with the line V = 500d superimposed.]
Figure A: Velocity-distance plot for Hubble's original data from Table 1 of the 1929 paper. (The notations v and r are used by Hubble.) These represent the data for galaxies at distances of up to 2 Mpc. Notice the negative velocities for some galaxies at distances of less than 1 Mpc. Hubble decided to overlook this data and focus only on the data for galaxies moving away. Also superimposed on the data is the graph of V = 500d, the constant K = 500 being Hubble's estimate for the slope of the V-d graph. Current estimates for the slope (called the Hubble constant) are between 60 and 80 km/s per Mpc.

[Figure B chart: velocity v (in km/s), from 0 to 2500, plotted against distance d (in Mpc, 10^6 parsec), from 0 to 5, with the line V = 500d superimposed.]
Figure B: Velocity-distance plot for Hubble's original data from Table 2 of the 1929 paper. These represent the data for galaxies at distances of up to 4 Mpc. Also superimposed on the data is the graph of V = 500d, the constant K = 500 being Hubble's estimate for the slope of the V-d graph. Current estimates for the slope (called the Hubble constant) are between 60 and 80 km/s per Mpc.

******************************************************************

After the publication of his famous 1929 paper, Hubble published a few more papers (e.g., with Humason in 1931) confirming the velocity-distance relation with data obtained from even more distant galaxies than those studied in 1929. The graphs may be found in the articles at the links given here:
http://www.aip.org/history/cosmology/ideas/hubble-work.htm
http://www.astronomy.ohio-state.edu/~pogge/Ast162/Unit5/expand.html
http://ned.ipac.caltech.edu/level5/Sept03/Sandage/Sandage5_2.html
http://apod.nasa.gov/debate/1996/sandage_hubble.html
http://ned.ipac.caltech.edu/level5/Sandage2/paper.pdf
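The slope and intercept behind a plot like Figure A can be recovered with an ordinary least-squares fit to the 24 (r, v) pairs of Hubble's Table 1, reproduced earlier in this appendix. Hubble effectively forced his line through the origin with K = 500; an unconstrained fit, sketched below, also yields a (small, negative) intercept, which connects to this document's running theme of the nonzero intercept c:

```python
# Unconstrained least-squares fit v = K*r + c to the 24 (r, v) pairs of
# Hubble's 1929 Table 1 (r in Mpc, v in km/s).
r = [0.032, 0.034, 0.214, 0.263, 0.275, 0.275, 0.45, 0.5, 0.5, 0.63, 0.8,
     0.9, 0.9, 0.9, 0.9, 1.0, 1.1, 1.1, 1.4, 1.7, 2.0, 2.0, 2.0, 2.0]
v = [170, 290, -130, -70, -185, -220, 200, 290, 270, 200, 300, -30, 650,
     150, 500, 920, 450, 500, 500, 960, 500, 850, 800, 1090]

n = len(r)
rbar = sum(r) / n
vbar = sum(v) / n
num = sum((ri - rbar) * (vi - vbar) for ri, vi in zip(r, v))
den = sum((ri - rbar) ** 2 for ri in r)
K = num / den                   # slope, km/s per Mpc
c = vbar - K * rbar             # intercept, km/s
print(f"K = {K:.0f} km/s/Mpc, intercept c = {c:.0f} km/s")
# K = 454 km/s/Mpc, intercept c = -41 km/s
```

The unconstrained slope of about 454 km/s/Mpc is close to, but not identical with, Hubble's round value of K = 500 obtained from his solar-motion solutions.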

Appendix 3 Facebook IPO Debacle: How is it affecting individual investors?
Here are some links that address this topic. I have just compiled them here without comment. http://www.bloomberg.com/news/2012-05-24/facebook-investor-spending-months-salary-exposes-hype.html?cmpid=msnmoney
Michael McClafferty, a freshman finance major at Michigan State University, saw his "first big investment" turn into a $3,000 loss when he sold the shares at $35. The 19-year-old student estimates he spent $8,000 more than he wanted to while repeating orders that wouldn't go through on the first day, and failing to cancel them because of the technical problems. "I didn't know what happened," he said. "Then I was like, 'they should be able to do something about it.' They messed up pretty big from what I see, and it hurt more people than just me."
Ryan Cefalu, who lives with his wife and two kids in Baton Rouge, Louisiana, saw in Facebook Inc. (FB)'s much-anticipated initial public offering a chance to buffer his retirement fund. His expectations fizzled along with the stock within the first minutes of trading. For Cefalu, whose children are ages 12 and 1, the first-day glitches meant more than a bad day of trading: they made him buy twice as many shares as he intended after an order he canceled went through hours later, he said. With shares of Zynga Inc. (ZNGA) slumping along with Facebook, he estimates he lost a combined $2,250 as a result of the Facebook debut debacle.

"I thought it would be fun to get in on the initial frenzy," said Linda Lantz, an online marketer in Granite Bay, California, who bought 100 shares. "Now it makes me think 'Oh god, should I bail or is it going to come back?'"

"Short term fluctuations don't bother me," said Charles Landry of Sacramento, California, who bought 1,000 shares on May 18. "Facebook has the potential to be, in the long term, one of the iconic companies in Silicon Valley, a la Google, a la Apple."



Appendix 4 Quarterly Profits-Revenues data for Ford Motor Company
Quarter   Revenues, x     Profits, y      Costs, z = (x - y)
          ($, billions)   ($, billions)   ($, billions)
1Q2012    32.4             1.396          31.004
4Q2011    34.6            13.6            21.0
3Q2011    33.1             1.649          31.451
2Q2011    35.5             2.398          33.102
1Q2011    33.1             2.551          30.549
4Q2010    32.5             0.19           32.31
3Q2010    29.0             1.687          27.313
2Q2010    31.3             2.599          28.701
1Q2010    28.1             2.085          26.015
4Q2009    34.8             0.886          33.914
3Q2009    30.9             0.997          29.903
2Q2009    27.2            -0.638          27.838
1Q2009    24.8            -1.427          26.227
4Q2008    29.2            -5.875          35.075
3Q2008    32.1            -0.129          32.229
2Q2008    38.6            -8.667          47.267
1Q2008    39.4             0.1            39.3
4Q2007    45.5            -2.753          48.253
3Q2007    41.1            -0.38           41.48
2Q2007    44.2             0.75           43.45
1Q2007    43.0            -0.282          43.282

The costs in column 4 were deduced from the revenues and profits in columns 2 and 3 using the equation Profits = Revenues – Costs. The graph of C versus R is then prepared to deduce the C-R relation and hence the P-R relation. The three "exceptional" data points were eliminated from the linear regression analysis.
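The procedure just described can be sketched as follows. This is a minimal illustration using a handful of quarters from the table; which three data points the author deemed "exceptional" and excluded is not specified row-by-row here, so this sketch simply keeps all the points it uses.

```python
# Sketch of the stated procedure: recover costs z = x - y from quarterly
# revenues x and profits y, then fit the C-R relation z = hx + c by least
# squares. A subset of quarters from the table is used for illustration.
data = {            # quarter: (revenues x, profits y), $ billions
    "1Q2012": (32.4, 1.396),
    "3Q2011": (33.1, 1.649),
    "2Q2011": (35.5, 2.398),
    "1Q2011": (33.1, 2.551),
    "3Q2010": (29.0, 1.687),
    "1Q2010": (28.1, 2.085),
}
x = [xy[0] for xy in data.values()]
z = [xy[0] - xy[1] for xy in data.values()]   # costs, from Profits = Revenues - Costs

n = len(x)
xbar, zbar = sum(x) / n, sum(z) / n
num = sum((xi - xbar) * (zi - zbar) for xi, zi in zip(x, z))
den = sum((xi - xbar) ** 2 for xi in x)
h = num / den
c = zbar - h * xbar
print(f"z = {h:.2f} x + {c:.2f}")   # the C-R relation; the P-R relation follows as y = x - z
```

Once the C-R relation z = hx + c is in hand, the profits-revenues relation follows immediately as y = x − z = (1 − h)x − c, i.e., a linear law with a nonzero intercept.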
https://resources.oncourse.iu.edu/access/content/group/8f7ba376-1242-4e8a-0048-acbde2ffaad8/StudentResources/scans/Macroeconomics.pdf (Changing slope of a nonlinear curve; the cost of producing iPods.)

About the author V. Laxmanan, Sc. D.
The author obtained his Bachelor's degree (B. E.) in Mechanical Engineering from the University of Poona and his Master's degree (M. E.), also in Mechanical Engineering, from the Indian Institute of Science, Bangalore, followed by Master's (S. M.) and Doctoral (Sc. D.) degrees in Materials Engineering from the Massachusetts Institute of Technology, Cambridge, MA, USA. He then spent his entire professional career at leading US research institutions (MIT, Allied Chemical Corporate R&D, now part of Honeywell, NASA, Case Western Reserve University (CWRU), and the General Motors Research and Development Center in Warren, MI). He holds four patents in materials processing, has co-authored two books, and has published several scientific papers in leading peer-reviewed international journals. His expertise includes developing simple mathematical models to explain the behavior of complex systems.

While at NASA and CWRU, he was responsible for developing materials processing experiments to be performed aboard the space shuttle and developed a simple mathematical model to explain the growth of Christmas-tree-like, or snowflake-like, structures (called dendrites) widely observed in many types of liquid-to-solid phase transformations (e.g., the freezing of all commercial metals and alloys, the freezing of water, and, yes, the production of snowflakes!). This led to a simple model to explain the growth of dendritic structures in both the ground-based experiments and the space shuttle experiments.

More recently, he has been interested in the analysis of the large volumes of data from financial and economic systems and has developed what may be called the Quantum Business Model (QBM). This extends (to financial and economic systems) the mathematical arguments used by Max Planck to develop quantum physics, using the analogy Energy = Money, i.e., energy in physics is like money in economics.
Einstein applied Planck's ideas to describe the photoelectric effect (by treating light as being composed of particles called photons, each carrying the fixed quantum of energy conceived by Planck). The mathematical law deduced by Planck, referred to here as the generalized power-exponential law, might actually have many applications far beyond the blackbody radiation studies where it was first conceived. Einstein's photoelectric law is a simple linear law, as we see here, and was deduced from Planck's nonlinear law for blackbody radiation. It appears that financial and economic systems can be modeled using a similar approach. Finance, business, economics, and the management sciences today essentially seem to operate like astronomy and physics did before the advent of Kepler and Newton.
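The three laws discussed in this document can be sketched numerically. The following is a minimal illustration in Python; the coefficients h, c, m, n, and a are made-up assumptions for illustration, not fits to any company's data. Setting the derivative of the power-exponential law to zero locates the maximum point, "Mount Profit", at x = n/a:

```python
import math

# Illustrative coefficients only (assumptions, not fitted values).
def linear_law(x, h=0.1, c=-2.0):
    """Linear law: y = h*x + c."""
    return h * x + c

def power_law(x, m=0.5, n=0.8):
    """Power law: y = m * x^n."""
    return m * x ** n

def power_exponential_law(x, m=0.5, n=0.8, a=0.05):
    """Power-exponential law: y = m * x^n * exp(-a*x).

    Differentiating and setting dy/dx = 0 gives
    n*x^(n-1) - a*x^n = 0, i.e., the maximum is at x = n/a.
    """
    return m * x ** n * math.exp(-a * x)

m, n, a = 0.5, 0.8, 0.05
x_peak = n / a  # revenues at the profit maximum ("Mount Profit")
print(x_peak)   # 16.0
```

A company operating to the right of x_peak sees its profits fall even as revenues grow, which is the Type III behavior described in the text.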


An Open Letter to All CEOs: Let's Build a Profits Engine
The following open letter was sent today, via email, to some of my friends in the Metro Detroit area and to others who have taken an interest in these ideas. I have now posted it here as an open letter to all CEOs who are interested in delivering profits to their stockholders more consistently and increasing value for their customers. I really believe we can do this using scientific principles. Just look at the data: the stunning confirmation of the theoretical predictions regarding the three linear laws and the two nonlinear laws, and the potential sighting of Mount Profit as well.

******************************************************************

Dear All:

As you all know, I got intrigued by the Facebook IPO fiasco and decided to take a look at their financials. This got me back to one of my favorite topics of the last few years. I had been generalizing Planck's radiation law and applying it to understand the financial behavior of companies by making the simple substitution: energy in physics = money in economics. All mathematical symbols in Planck's equation (and Einstein's photoelectric law) can then be reinterpreted and given economic or financial significance. I would like to call attention to the following document, where I have presented these ideas much more completely and backed them with good data from several leading companies.

http://www.scribd.com/doc/94649587/Three-Types-of-Companies-From-Quantum-Physics-to-Economics

In particular, I call your attention to the conclusions section on pages 37 and 38, especially the very last conclusion. If anything, this is what motivates me to continue pursuing this.

When a young James Watt was looking for something to do with himself after completing his studies, the University of Glasgow gave him some laboratory space (for free, of course) and also asked him to take a look at the so-called Newcomen engine, a steam engine, that they owned.

http://inventors.about.com/od/wstartinventors/a/JamesWatt.htm
http://inventors.about.com/library/inventors/blsteamengine.htm
http://inventors.about.com/od/indrevolution/ss/Industrial_Revo_4.htm
http://en.wikipedia.org/wiki/Watt_steam_engine

Back in those olden golden days, students would come to college in their best clothes, even sporting a tie. When the students (who were majoring in what was then called Natural Philosophy) were assembled to be shown how an actual steam engine worked, this college-owned engine would invariably huff and puff and quit. A lot of smoke, a lot of noise, but no action. Yes, it did work sometimes, but NOT when the students were assembled for the show. Always an embarrassment.

So James Watt got busy and studied how this stupid engine worked, or rather, did NOT work. He then talked to Joseph Black, a professor of what we would now call physics, and learned about a remarkable property of steam which we now call latent heat. Nobody knew about this but Black. Armed with this valuable information, James Watt figured out how to improve the steam engine and make it work consistently. He started "recycling" the heat in the steam and designed and built what is called the steam condenser. He also improved the heat insulation. The thermal efficiency of the engine was more than doubled and, more importantly, it worked consistently. This is part one of the story. Now listen to part two.

Then Watt tried to sell his engine to local industrialists. Back in those days, coal mines would frequently flood, and horses were used to draw the water out of the flooded mines. James Watt talked to his friend Matthew Boulton about starting a company that would sell his new and improved steam engine to the owners of the coal mines. He told Boulton that his engine could do the work of many horses: the coal miners would save money, since they would no longer have to feed the horses. Of course, there is a cost associated with replacing the horse with the engine: the cost of the coal needed to operate Watt's steam engine.

Now, Boulton asked James Watt an interesting question: "How much work can your engine do, and how many horses will it replace?"

http://www.newton.dep.anl.gov/askasci/phy99/phy99x45.htm

A really fascinating question indeed! Newton conceived the idea of a force and enunciated the three laws of force, or the three laws of motion. But even Newton did not know anything about the work done by a force. As we all know today, the work W done by a force F is given by W = Fd, the product of the force F and the distance d over which the force acts. This was James Watt's most far-reaching contribution to physics, and a little-recognized one at that.

Anyway, Boulton asked Watt something he did not have a ready answer for. So he went back (I presume to the flooded coal mines, volunteering to do some "work" there) and got hold of a horse. He tied a big bucket to one end of a rope, passed the rope over a pulley, and tied the other end of the rope to the horse. Then he made the horse walk back and forth as it raised the water from the flooded coal mine. Thus came the idea of work, and eventually the "horsepower". Work became a precisely measurable quantity: W = Fd.
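Boulton's question can be answered with Watt's own yardstick. Here is a small sketch; the pull and the distance below are hypothetical figures for illustration, not Watt's actual measurements. The only firm fact used is Watt's eventual definition of one horsepower as 33,000 foot-pounds of work per minute:

```python
# W = F * d, as Watt quantified it.
# Assumed (hypothetical) figures for one horse over one minute:
force_lbf = 180.0    # steady pull in pounds-force (assumption)
distance_ft = 181.0  # distance walked in one minute, in feet (assumption)

work_ft_lbf = force_lbf * distance_ft  # work done: W = F * d

# Watt's definition: 1 horsepower = 33,000 foot-pounds per minute.
horsepower = work_ft_lbf / 33000.0
print(round(horsepower, 3))  # about 0.987 horsepower
```

With these assumed numbers, one horse delivers just under one horsepower, which is the spirit of Watt's definition: a sustained rate of work a real horse could roughly match.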
Even Einstein used this idea later, in 1905, when he described the work done on an electron as it is moved in an electric field under the action of the "relativistic" electrical force. Even Einstein used Watt's basic equation for the work done by a force.

A generation later, James Prescott Joule took the same idea of work and tried to measure the amount of heat energy that is equal to a given amount of mechanical work. Joule allowed a weight to fall; the weight was attached to a spindle that rotated a shaft, which stirred water in a well-insulated container. The work done by the falling weight produces frictional heat Q in the water, which raises the temperature of the water. Thus we get the precise relation between heat energy Q and mechanical work W. This also led to the far-reaching law called the law of conservation of energy.

In the old days, the unit "calorie" was used to measure heat energy. We still use it: control your calorie intake if you want to control your weight. But now we also use the unit called the joule for energy, in honor of this great experimenter, and we use horsepower and the units of watts, kilowatts, megawatts, etc. in honor of James Watt. One joule (the unit of energy) equals one watt-second: the constant power of one watt applied for one second.

Now, if you understand all this and are wondering why I am bothering you with this history of engines and science, please go back and read the last conclusion that I mentioned. Just as we did not know how to build heat engines, or steam engines, until Watt did his pioneering work, we do not know how to build "Profits Engines" today. I came to this conclusion as far back as 2000. Now I am restating it, with new data, after we have experienced a total financial collapse in 2008. If we want the economy to improve and people to get back to work, we must learn how to build a good "Profits Engine". The gedanken (thought) experiments, such as those I have described in the document above, may be the first step. I may be at part one of the story, as told above. Now part two has to begin. I need associates who are passionate about these ideas and want to take them to the next level.
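The unit relationships behind Joule's experiment are easy to check in code. A small sketch (the 4.184 J figure is the standard thermochemical calorie; a food "Calorie" is actually a kilocalorie):

```python
JOULES_PER_CALORIE = 4.184      # thermochemical calorie, in joules

# One joule is one watt-second: one watt of power sustained for one second.
watts = 1.0
seconds = 1.0
one_joule = watts * seconds     # = 1.0 J

# A food "Calorie" is a kilocalorie:
food_calorie_in_joules = 1000 * JOULES_PER_CALORIE
print(round(food_calorie_in_joules, 3))  # 4184.0
```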
A good place to start might be to find people who have connections at the highest levels at Ford Motor Company. In the early 2000s, I knew a few and did succeed in making some good connections, but those efforts failed. We have to try again.

Why Ford? Location: their headquarters is right here in Dearborn, MI. Ford is also interesting in another sense. It seems to be the case of a company that has actually gone past the maximum point of the profits-revenues curve predicted by this extension of the ideas from quantum physics to economics. It still behaves erratically and delivers profits erratically, in spite of all the great efforts since 2000 to get its "Profits Engine" running smoothly. A whopping $20 billion in profits in 2011, but what about 2012? Who knows? Is anyone willing to make any bets yet? If you agree with this last point, then we need to see whether Mr. Bill Ford, Jr. will pay attention to these ideas and get his Profits Engine churning.

As always, my sincere thanks for giving me your valuable time. But now let's try to do something more, and/or point me in the right direction.

With my best regards,

Very sincerely,

V. Laxmanan
May 28, 2012
