Consumer and Producer Surplus

Bingda Huang

Economics is a science that helps us allocate and use scarce resources rationally. At a micro level, it helps us better understand our world and analyze people and situations with an economic way of thinking. Even when we cannot be sure we are making the right choices, it can help us avoid some common traps. This short article will help you understand the basic concepts of consumer surplus and producer surplus in economics and how they relate to calculus.

Before we get into consumer and producer surplus, let’s take a quick look at demand and supply curves. Basically, the quantity of an item produced and sold can be described by the item’s supply and demand curves. The supply curve shows what quantity, q, of the item producers supply at different prices, p. Consumers’ behavior is reflected in the demand curve, which shows what quantity of the item they buy at various prices. See graph:

Supply and Demand Curves

The logic of the supply-and-demand model is simple. By putting the two curves together, we can find a price at which the quantity buyers are willing and able to purchase equals the quantity sellers will offer for sale (this price is called the equilibrium price). At equilibrium, a quantity q^* of the item is produced and sold for a price of p^* each. Notice that, under this condition, some consumers have bought the item at a lower price than they would have been willing to pay. (For example, some consumers would have been willing to pay prices up to p_1.) Similarly, some suppliers would have been willing to produce the item at a lower price (down to p_0, in fact). We define the following terms:

  • Consumer surplus measures the consumers’ gain from trade. It is the total amount gained by consumers by buying the item at the current price rather than at the price they would have been willing to pay.
  • Producer surplus measures the suppliers’ gain from trade. It is the total amount gained by producers by selling at the current price, rather than at the price they would have been willing to accept.

In brief, both consumers and producers are richer for having traded, and the consumer and producer surplus measure how much richer they are. See graph:

Consumer and Producer Surplus

In reality, prices do not always settle at equilibrium. In a free market, the price of a product usually moves toward the equilibrium price, but external forces can make the price artificially high or artificially low. For example, rent control keeps prices below market value, while cartel pricing or minimum wage laws keep prices above market value. What happens to consumer and producer surplus at disequilibrium prices? Here is an example to help us find out:

The dairy industry has cartel pricing: the government has set milk prices artificially high. What effect does raising the price to p^+ from the equilibrium price have on consumer and producer surplus?

Figure (1)

Figure (1) above gives a graph of possible supply and demand curves for the milk industry. Assume that the price is fixed at p^+, above the equilibrium price. Consumer surplus is the difference between the amount consumers pay (p^+) and the amount they would have been willing to pay (given the demand curve). This is the shaded area in Figure (2) below. This consumer surplus is less than the consumer surplus at the equilibrium price, as shown in Figure (2):

Figure (2)

For producer surplus: at price p^+, the quantity sold, q^+, is less than at the equilibrium price. Producer surplus is represented by the area between the line p = p^+ and the supply curve, up to the quantity q^+ actually sold. This area is shaded in Figure (3) below. In this case, the producer surplus is larger at the artificial price than at the equilibrium price. (However, different supply and demand curves may lead to different answers.)

Figure (3)

In summary, for the change in overall gains: the shaded area in Figure (4) below indicates the total gains from trade (consumer surplus + producer surplus) at the price p^+. The shaded area in Figure (5) below represents the total gains from trade at the equilibrium price p^*. You can clearly see that the total gain from trade decreases at the artificial price. The total financial impact of artificially high prices on producers and consumers taken together is negative.

Figure (4)

Figure (5)

Now that we know the concepts of consumer and producer surplus, it is time to introduce how they relate to calculus. Using integrals, we can quickly find the consumer or producer surplus.

Now assume that all consumers buy the item at the highest price they are willing to pay, and divide the interval from 0 to q^* into subintervals of length \Delta q.

Figure (6)

As Figure (6) shows, the first \Delta q items are sold at a price of approximately p_1, the next \Delta q are sold at a slightly lower price of approximately p_2, the next \Delta q at a price of approximately p_3, and so on. Thus, the total consumer expenditure is approximately

p_1\Delta q + p_2\Delta q + p_3\Delta q + \cdots = \sum p_i \Delta q

Expressed in integral form:

If the equation of the demand curve is p = f(q), and if all consumers willing to pay more than p^* pay the amount they are willing to pay, then as \Delta q approaches 0, the total consumer expenditure becomes

\int_{0}^{q^*} f(q)\, dq

We know that consumer surplus represents the extra money that ends up in consumers’ pockets; it is the area between the demand curve and the horizontal line at p^*. Expressed as an integral, it is:

\int_{0}^{q^*} f(q)\, dq - p^*q^*

Similarly, producer surplus represents the extra money that ends up in producers’ pockets; it is the area between the supply curve and the horizontal line at p^*. If the supply curve has the equation p = g(q), then expressed as an integral it is:

p^*q^* - \int_{0}^{q^*} g(q)\, dq

In summary, we can easily find the consumer and producer surplus using the two integral formulas above.
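To see how these two formulas can be evaluated in practice, here is a minimal Python sketch using a made-up linear market: demand p = f(q) = 100 - q and supply p = g(q) = 20 + q, which meet at the equilibrium q^* = 40, p^* = 60. These curves and numbers are illustrative assumptions, not the ones in the figures above.

from scipy.integrate import quad  # numerical integration

def f(q):
    return 100 - q   # hypothetical demand curve

def g(q):
    return 20 + q    # hypothetical supply curve

q_star, p_star = 40, 60   # equilibrium, where f(q*) = g(q*) = 60

# Consumer surplus: area under the demand curve minus total spending p* q*
consumer_surplus = quad(f, 0, q_star)[0] - p_star * q_star

# Producer surplus: total revenue p* q* minus area under the supply curve
producer_surplus = p_star * q_star - quad(g, 0, q_star)[0]

print(consumer_surplus, producer_surplus)   # 800.0 800.0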

As stated at the beginning of this blog, economics can help you analyze things rationally, and I hope this discussion of consumer and producer surplus helps you solve economics problems (and similar ones) in the future.

Sources

Flooding in the Grand Canyon

Tahj Burnett

Calculus can be a helpful tool for scientists when it comes to solving real-world environmental problems that could have major consequences if left unresolved. One such problem involves one of the 7 Natural Wonders of the World, the Grand Canyon. Formed by the Colorado River in Arizona, the Grand Canyon is approximately 277 miles long and about a mile deep. As you might expect, throughout its history the Grand Canyon has had to deal with flooding.

Thankfully, the Glen Canyon Dam at the top of the Grand Canyon helps prevent natural flooding. In 1996, scientists decided an artificial flood was necessary to restore the environmental balance. Water was released through the dam at a controlled rate shown in the figure below. The figure also shows the rate of flow of the last natural flood in 1957.

In hindsight, this graph can give us valuable insight into whether this artificial flooding was actually helpful in reducing flooding, and to what extent.

Information from the graphs: comparing the 1957 natural flood to the 1996 artificial flood

Normal rate of discharge

By looking at this graph, we see that the rate of discharge of water (in m³/s) passing through the dam is displayed on the y-axis and time (in months) is on the x-axis. To approximate the water passing through the dam before both the artificial flood (shown as the black curve) and the natural flood (shown as the blue curve), we evaluate the y-value of each curve just before it starts to sharply increase (because of flooding).

  • The y-value of the curve before the sharp increase in the 1996 artificial flood is ~400 m³/s
  • The y-value of the curve before the sharp increase in the 1957 natural flood is ~250 m³/s

These two values signify what the “normal” rate of discharge was prior to flooding.

Maximum rate of discharge

We can also determine the maximum rates of water discharge for each flood by looking at the highest y-values on each curve. The maximum rates of discharge based on the graph are:

  • ~1250 m³/s for the 1996 artificial flood
  • ~3500 m³/s for the 1957 natural flood

Duration

To find the duration of the floods, we can this time look at the x-axis and estimate the total time it took for the rate of discharge to increase to its maximum and then decrease (roughly) back down to the “normal” rate of discharge prior to the flooding. The duration of each flood based on the graph is:

  • ~half a month = ~15 days for the 1996 artificial flood
  • ~four months = ~122 days for the 1957 natural flood

How much additional water passed down the river in 1996 as a result of the artificial flood?

Here is where the real calculus comes in! Since our graph shows two curves that represent the rates at which water flows into the canyon over time, we can determine the amount of additional water that passed down the river (i.e., the total accumulated change) by calculating the area under each curve but above the “normal” rate. This area could be calculated using a definite integral, but since we only have a graph of the two curves, we can estimate the accumulated change from the graph itself.

The area under the 1996 curve can be estimated using the formula for the area of a rectangle:

\text{Area} = \text{Length} \times \text{Width}

Per the answers we found in the previous questions, we know:

\text{Length (the maximum rate - the ``normal'' rate)} \approx 1250-400 = 850~\text{m}^3/\text{s}

\text{Width (the duration of the flooding)} \approx 15~\text{days}

\text{Area} \approx ( 850~\text{m}^3/\text{s}) \times (15~\text{days})

To simplify things we can also convert the 15 days → 1,296,000 seconds so that they match the units shown on the graph. This leaves us with a final answer of:

\text{Area} \approx ( 850~\text{m}^3/\text{s}) \times (1,296,000~\text{s}) = 1.1016 \times 10^9~\text{m}^3

of additional water.

How much additional water passed down the river in 1957 as a result of the flood?

Looking at the natural flooding curve, it’s clear that the shape of the curve isn’t quite the same as the artificial flood’s rectangle. Instead we’ll do the same process except we’ll use the formula for determining the area of a triangle:

\text{Area} = \frac12 \times \text{Length} \times \text{Width}

Per the answers we found in the previous questions, we know:

\text{Length (the maximum rate - the ``normal'' rate)} \approx 3500-250 = 3250~\text{m}^3/\text{s}

\text{Width (the duration of the flooding)} \approx 4~\text{months} = 122~\text{days}

\text{Area} \approx \frac12 \times (3250~\text{m}^3/\text{s}) \times (122~\text{days})

Again we can convert the days to seconds (1.054 \times 10^7~\text{s}) which leaves us with a final answer of:

\text{Area} \approx \frac12 \times (3250~\text{m}^3/\text{s}) \times (1.054 \times 10^7~\text{s}) = 1.7128 \times 10^{10}~\text{m}^3

of additional water.
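Since both estimates are just areas of simple shapes, they are easy to double-check. Here is a small Python sketch that redoes the arithmetic above; the baseline, peak, and duration values are the rough graph readings quoted earlier, not exact measurements.

SECONDS_PER_DAY = 24 * 60 * 60

# 1996 artificial flood: roughly a rectangle above the ~400 m^3/s baseline
height_1996 = 1250 - 400                      # m^3/s above normal
width_1996 = 15 * SECONDS_PER_DAY             # ~15 days, in seconds
extra_1996 = height_1996 * width_1996         # ~1.10 x 10^9 m^3

# 1957 natural flood: roughly a triangle above the ~250 m^3/s baseline
height_1957 = 3500 - 250                      # m^3/s above normal
width_1957 = 122 * SECONDS_PER_DAY            # ~122 days, in seconds
extra_1957 = 0.5 * height_1957 * width_1957   # ~1.71 x 10^10 m^3

print(f"{extra_1996:.4e} m^3 of extra water in 1996")
print(f"{extra_1957:.4e} m^3 of extra water in 1957")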

It’s clear from the graph and the numbers that artificial flooding, as opposed to simply allowing natural flooding, substantially reduces the amount of water discharged into the Grand Canyon.

Distribution of Resources

AnaBel Dawson

Discovering if a particular resource is distributed evenly amongst members of a population is a very important political and economic question. For instance, a country may want to know if its distribution of wealth is becoming more or less equitable over time. Additionally, economists may want to measure which country has the most equitable income distribution. How would we answer these questions? Well… we can use our knowledge of integrals and the Fundamental Theorem of Calculus!

For starters, let F(x) represent the fraction of the resource owned by the poorest fraction x of the population. If the resources were distributed evenly throughout the population, then any fraction x of the population would have the same fraction of the total resource. This means that F(x)=x (for all values of x between 0 and 1). For example, when the resource is distributed evenly, then any 20% of the population will have 20% of the resource. Similarly, any 30% will have 30% of the resource, etc. 

Since F(x) represents the fraction of the resource owned by the poorest fraction x of the population, then F(0) = 0. In plain English, the poorest 0% of the population owns nothing, which is 0% of the resource. With this knowledge, it makes sense that F(1) = 1: the poorest 100% of the population owns all of the resource (100% of the resource). Values over 1 are not practically possible because we cannot account for more than 100% of the population or more than 100% of the available resources. F(x) is an increasing function, since it represents an accumulating value: as x increases, the fraction of the population included in the calculation increases, so the fraction of the resource owned by that population must also increase. Moreover, each successive slice of the population (moving from poorest to richest) owns at least as much of the resource as the slice before it, so the increments of F are non-decreasing. Because this holds for every increment, F(x) is concave up when graphed.

However, resources are not always distributed evenly amongst a population. When a resource is not dispensed evenly, individuals may want to discover how evenly the resource is distributed. Gini’s Index of Inequality, G, is one way to measure how evenly the resource is distributed. In other words, it is a summary measure of income inequality that measures the dispersion of income across the entire income distribution.  It is defined by:

\displaystyle G = 2\int_0^1(x-F(x)) dx

In this equation, G is a measure of inequality. When graphed, G equals the area below the line of perfect equality minus the area below the Lorenz curve, divided by the area below the line of perfect equality; since the area below the line of perfect equality is 1/2, this is the same as double the area between the Lorenz curve and the line of perfect equality. An explanation of this graph is shown below.

The area between the line of perfect equality and the Lorenz curve measures how far the distribution is from perfectly fair: the smaller this area, the fairer the distribution of the resource across the population. For example, here is a graphical representation of Gini’s Index of Inequality.

As shown above, the ideal distribution F(x) = x depicts perfect equality. The smaller the area Gini’s Index measures, the closer the distribution F(x) gets to the ideal (or fairest) distribution, which represents total equality. Two graphs of countries are shown below. Using Gini’s Index, we can conclude that country A has a more equitable distribution of wealth than country B.

Gini’s Index of Inequality can also be viewed as a measure of deviation from perfect equality. The minimum possible value of G is 0. When G equals zero, the resource is distributed equally among members of the population; in this instance, the graph of F(x) is a straight line with a slope of 1, and perfect equality has been achieved, with each person owning an equal share of the resources. The further a Lorenz curve deviates from this straight line, the higher the Gini coefficient becomes and the less equal the society. The maximum possible value of Gini’s Index of Inequality is 1.0, which occurs when all of the resource is owned by one person or group. In this case, the Lorenz curve hugs the x-axis until the very end, and the area between the Lorenz curve and the line of absolute equality is as large as possible; here, the distribution of resources across the population has reached total inequality.
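As a quick numerical check of the formula G = 2\int_0^1(x - F(x))\,dx, here is a minimal Python sketch. The Lorenz curves used (F(x) = x, x^2, and x^{10}) are made-up examples chosen only to show the range of values, not real income data.

from scipy.integrate import quad  # numerical integration

def gini(F):
    # Gini index for a Lorenz curve F defined on [0, 1]
    return 2 * quad(lambda x: x - F(x), 0, 1)[0]

print(gini(lambda x: x))       # 0.0    -> perfect equality
print(gini(lambda x: x**2))    # ~0.333 -> moderate inequality
print(gini(lambda x: x**10))   # ~0.818 -> close to total inequality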

It is important not to mistake Gini’s Index for an absolute measurement of income. For instance, a high-income country and a low-income country can have the same Gini coefficient if incomes are similarly distributed within each country. Below are graphs of perfect equality and perfect inequality, which are based on the Lorenz curve (the curve that plots, against the poorest fraction x of the population on the x-axis, the proportion of the total income cumulatively earned by that fraction on the y-axis).

Congratulations! Now you have the knowledge to understand some very important questions surrounding the distribution of resources. Understanding this concept will allow you to compare countries and understand which populations are most affected by an unequal distribution of resources. Unequal distribution of natural resources is one of the biggest drivers of the economic and geopolitical power relations that can lead to major conflict. In the long run, this knowledge can help raise awareness for populations and countries struggling with inequality, and help those countries move toward a more equal distribution of resources.

Sources

  • Catalano, M. T., Leise, T. L., & Pfaff, T. J. (2009). Measuring resource inequality: The Gini coefficient. https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=1032&context=numeracy
  • Gini Index. Databank. (2023). https://databank.worldbank.org/metadataglossary/world-development-indicators/series/SI.POV.GINI
  • Hughes-Hallett, D., Gleason, A. M., Connally, E., Kalaycıoğlu, S., Lahme, B., Lomen, D. O., Lock, P. F., & Lovelock, D. (2014). Applied Calculus (5th ed., pp. 328–329). Wiley.
  • Introduction to Inequality. International Monetary Fund. (2020, July 5). https://www.imf.org/en/Topics/Inequality/introduction-to-inequality
  • Ramzai, J. (2020, April 27). Clearly explained: Gini coefficient and Lorenz curve. Medium. https://towardsdatascience.com/clearly-explained-gini-coefficient-and-lorenz-curve-fe6f5dcdc07
  • Your guide to the Lorenz curve and Gini coefficient. Indeed.com. (2022, October 11). https://www.indeed.com/career-advice/career-development/lorenz-curve

Impact of Asthma on Breathing

Ashley Trattner

When I was 13, I had my first love … no, just kidding (for those who didn’t catch that I was quoting Justin Bieber’s “Baby” … anyway!). At 13, I started getting serious about basketball. I was practicing hard for hours on end, trying to get into the best shape possible for my team! But I was running into a problem: sports-induced asthma. Every time my heart rate reached a certain intensity, I felt a huge constriction in my ability to breathe properly and get enough oxygen. It wasn’t fun! So my parents and I went to a pulmonologist (a doctor who specializes in treating respiratory issues) to figure out what was going on. In my appointment, they had me do a spirometry test before and after running on a treadmill (pretty cool, right?).

A spirometry test consists of taking a big breath in and exhaling for as long and as hard as possible into a tube that’s connected to an analyzer (like blowing out birthday cake candles). This analyzer measures the volume of air you exhale as a function of time, and converts this information into multiple graphs that represent your lungs’ capacity for inhalation, exhalation, and your speed of exhalation. For example, this analyzer can create a volume-time curve graph (see Figure 4.122)⁴ which shows the time course of exhalation starting from a baseline of zero volume.¹ This allows physicians to evaluate the completeness of the end of the test, or when the patient cannot exhale any more.¹ General guidelines recommend that patients try to exhale for at least 6 seconds. If after about 15 seconds the patient is still exhaling, this may be indicative of obstructive impairment (common in obese patients or those with lung diseases).¹

VC represents a patient’s forced vital capacity, or the largest amount of air that they can forcefully exhale after breathing in. The slope of the curve represents the rate at which the patient exhales, first steep and then leveling off: the curve rises sharply at first (the patient’s initial hard exhale) and then flattens as it approaches the maximum volume of air the patient can push out, while the slope steadily decreases (the patient’s slow decline of exhalation effort as they run out of air). The x-axis represents the passage of time in seconds as the patient exhales, and the y-axis represents the volume of air in liters that the patient has exhaled. To evaluate the graph for normality, you need to check the ratio between the volume of air forced out in the first second and the total forced vital capacity of the lungs. The normal value of this ratio is roughly 0.75–0.85.² Values less than 0.70 suggest airflow limitation, and ratios higher than 0.85 suggest restrictive lung disease.²

Another graph this test creates is a flow-volume curve, which shows the flow rate of air (in liters per second) as a function of the volume exhaled (see Figure 4.123).⁴ This graph highlights the patient’s effort in exhaling and is characterized by an immediate vertical rise, a sharp peak, and a smooth descent that returns to zero flow.¹ The slope of this graph represents the change in air flow divided by the change in volume. The area under the flow rate, treated as a function of time and integrated from 0 to 1 second, represents the volume of air exhaled in the first second, i.e., the FEV1 value (which stands for “forced expiratory volume in 1 second”).³ This area can also be used to relate the flow-volume curve to the volume-time curve.

The patient exhales hard in the first few seconds and this forcefulness peaks and then decreases.

Based on FEV1 values and other results, doctors can make informed decisions about a patient’s treatment plan and/or the state of their lung health. Physicians start by looking at the FVC parameter to see if it falls within the normal range; next, the FEV1 parameter is checked. If the FEV1/FVC ratio is 69% or less, there is a strong likelihood of some form of obstructive lung disease. Generally, it is accepted that predicted percentages for FVC and FEV1 should be above 80% and that one’s FEV1/FVC ratio should be above 70% to be considered normal.⁷ Doctors can then take this information and implement treatments. For instance, one standard intervention to alleviate airflow obstruction is inhaled medicine that reduces swelling in the airways.⁸ In my case, after running, my ability to exhale was hindered compared to my pre-run breathing test. Physicians also compare results to large data pools that form general guidelines for what “good lung health” looks like. My doctor prescribed me an inhaler to take before playing sports, and it helped with my breathing immensely!
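To make the screening rule above concrete, here is a toy Python sketch that applies the FEV1/FVC thresholds quoted in this post. The function name and the sample numbers are hypothetical illustrations, not a clinical tool.

def interpret_ratio(fev1_liters, fvc_liters):
    # Apply the FEV1/FVC rule of thumb described above
    ratio = fev1_liters / fvc_liters
    if ratio <= 0.69:
        return f"ratio {ratio:.2f}: likely some form of obstructive lung disease"
    return f"ratio {ratio:.2f}: within the normal range quoted above"

print(interpret_ratio(2.1, 3.5))   # ratio 0.60: likely some form of obstructive lung disease
print(interpret_ratio(3.2, 4.0))   # ratio 0.80: within the normal range quoted above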

References

  1. CDC. Spirometry Quality Assurance: Common Errors and Their Impact on Test Results. 2012.
  2. FEV1 / FVC Ratio – General Practice Notebook. gpnotebook.com/simplepage.cfm?ID=-254803957.
  3. Flow Volume Loops: A Critical Analysis – Stepwards. Stepwards, 6 Mar. 2018.
  4. Hughes-Hallett, Deborah, et al. Applied Calculus. Available from: Yuzu, (5th Edition). Wiley Global Education US, 2013.
  5. Mayo Clinic. “Spirometry – Mayo Clinic.” Mayoclinic.org, 17 Aug. 2017.
  6. Moore, V.C. “Spirometry: Step by Step.” Breathe, vol. 8, no. 3, 1 Mar. 2012, pp. 232–240.
  7. NuvoAir. “Do You Know How to Interpret the Results of Your Spirometry Test?” Nuvoair.com, 2018.
  8. World Health Organization: WHO. “Chronic Obstructive Pulmonary Disease (COPD).”

 

The Population Centre of the United States

Kaity Taylor

The population centre of the United States can essentially be defined as the average location where people in the United States currently live. The U.S. Census describes the population centre as, ‘If the United States map was a scale and every person had equal weight, the centre of population is the place where the scale would balance’. The concept of the population centre was devised in order to track the demographic movement of people throughout the States. Since the early days of the United States, the population and, by extension, the population centre have been shifting West. This is due to factors such as economic development, exploration and expansion. The population centre allows tracking of population distribution trends.

As previously mentioned, the population centre has been shifting over the years. The first population centre, computed for 1790, was near Baltimore, Maryland. Since then, it has moved west and to the south, with the current population centre located in Hartville, Missouri, making this the fifth time in history that the population centre has landed in Missouri. In 2000 it was in Edgar Springs, Missouri. During the second half of the 20th century, the population centre moved about 50 miles west every ten years.

Mean Center of Population for the United States: 1790 to 2010

Mean Population Centre of the U.S. from 1790-2010


Mathematical processes such as looking at average rates of change can help us analyse this movement and understand how it is studied. For example, we can use maths to express the approximate position of the population centre as a function of time since 2000. Let’s assume t represents the number of years since 2000. Considering that the population centre moved 50 miles west every ten years, we can conclude that it moves at a rate of 5 miles per year. Thus, the approximate position of the population centre, measured westward from Edgar Springs along the line through Baltimore, can be given by the equation: \text{Position}(t) = 5t

Straight-line distance from Baltimore, MD to Edgar Springs, MO

We can also use similar processes to determine whether the population centre could have been moving at roughly the same rate for the last two centuries. If the population centre had been moving at a rate of 5 miles per year consistently for the past two centuries, the total distance covered would be

5~\text{miles/year} \times 200~\text{years} = 1000~\text{miles}.

The straight-line distance from Baltimore to Edgar Springs is only about 800 miles, so the population centre cannot have moved a full 5 miles per year for the entire two centuries; the rate must have been somewhat slower in the earlier decades, even though 1,000 miles is the right order of magnitude.
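For completeness, here is a tiny Python sketch of the linear model described above; the 5-miles-per-year rate and the 200-year extrapolation are the figures used in this post.

def position(t):
    # Miles west of Edgar Springs, t years after 2000 (assumes 5 mi/yr)
    return 5 * t

print(position(10))   # 50 miles west of Edgar Springs by 2010
print(5 * 200)        # 1000 miles covered if the rate held for 200 years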


Mathematical formulas and calculations such as these are important for a variety of reasons. Having data such as the population centre of the United States provides important information about the current state of, and trends within, the United States. This data helps policymakers, urban planners, and researchers make informed decisions regarding infrastructure development, resource allocation, and social programs. Understanding population distribution patterns is necessary for addressing the needs and challenges of an ever-changing country.

Sources
“Geographic Centers of Population: United States.” Census.gov, U.S. Census Bureau, 2021, www.census.gov/geographies/reference-files/time-series/geo/centers-population/united-states.html.

“2020 Census Reveals the United States Is a Much Different Nation.” Census.gov, U.S. Census Bureau, 2021, www.census.gov/newsroom/press-kits/2021/2020-center-of-population.html.

“Center of Population.” NOAA’s National Ocean Service, U.S. Department of Commerce, 22 Mar. 2022, oceanservice.noaa.gov/news/mar22/center-population.html.

Hughes-Hallett, D., Gleason, A. M., Lock, P. F., Flath, D. E., et al. Applied Calculus. 5th ed., Wiley, 2013.

The Keeling Curve

Leah Parsons

The graph known as the Keeling Curve is the longest uninterrupted record of atmospheric carbon dioxide in the world, with data collection beginning in the 1950s. There are many different angles from which to interpret and discuss this data, from a science lens to an ethics and values viewpoint to a sustainability interpretation. However, due to time and space limitations, as well as the purpose of writing on a blog called Got Calc?, the important information about the Keeling Curve that we will discuss here is this: it is a perfect real-life example through which we can explore different types of functions and examine the rate of change (i.e., the derivative) of each of three models (linear, exponential, and polynomial) during a particular year.

Before we get into all of this though, here’s a little more background about the Keeling Curve:

Beginning in 1958, the curve displays data that describes seasonal and annual changes of CO₂ buildup in the middle layers of the troposphere. Climate scientist and namesake Dr. Charles David Keeling of the Scripps Institution of Oceanography first devised the graph at the Mauna Loa Research Observatory in Hawaii, where he managed air sampling efforts between 1958 and 1964. 

The y-axis unit used on the graph is parts per million (ppm), which represents the number of CO₂ molecules present per every million molecules of air. Overall, the Keeling Curve shows that average CO₂ concentrations in the air per year have increased substantially since 1959; average concentrations used to rise by about 1.3 to 1.4 ppm per year until the 1970s, but they began increasing by more than 2 ppm per year in the 2010s. The graph also shows seasonal trends; in general, CO₂ concentrations decrease during the Northern Hemisphere spring and summer months. This is due to an increase in photosynthesis as a result of the rapid vegetation growth during this season.

Now that we have some background, we can get into the calculus portion. How do we fit functions to the Keeling Curve data to actually model carbon dioxide concentrations in ppm?

Well, here are three functions that can model this general trend, with t representing number of years since 1950 for the sake of simplicity:

f(t) = 303 + 1.3t

g(t) = 304e^{0.0038t}

h(t) = 0.0135t^2 + 0.5133t + 310.5

The first function, f(t), is a linear model, which is the simplest and therefore best used for quick approximations. This linear model shows us right away that there is a steady upward trend in the data with respect to time, with a positive slope of 1.3 ppm/year, meaning that if the data fit a simple linear model starting at 303 ppm of CO₂ in 1950, the concentration would increase by 1.3 ppm each year. The linear model means that the rate of change of carbon dioxide is constant from year to year; if we want to predict the rate of change in a specific year, such as 2010 (exactly 60 years later), it is still 1.3 ppm/year, which is the derivative of the function at any time t.

The next model, g(t), is an exponential function model. This function gives a more detailed description of the variation in carbon dioxide concentrations since 1950 because, as an exponential growth curve, it tells us that the slope of the function becomes steeper as time moves forward. In other words, the rate of change of the carbon dioxide concentration gets larger as time goes on. This represents how a portion of carbon dioxide becomes trapped in the atmosphere each year, compounding onto the portion that was already trapped. So the rate of change is not constant. Therefore, in contrast to the linear model, if we want to predict what the rate of change of CO₂ would be in a specific year like 2010, we need to find the slope of the line tangent to the curve at that data point in the year 2010. This means finding the derivative of the function and then plugging in 60 for t (because 2010 - 1950 = 60). To find the derivative, we use the exponential rule, which follows from the chain rule: the derivative is the full function g(t) multiplied by the derivative of the exponent, so g'(t) = 0.0038 \cdot 304e^{0.0038t}. Plugging 60 in for t, we get a predicted rate of change of about 1.45 ppm/year of CO₂ in 2010; this is a larger predicted rate of change for 2010 than we calculated with the linear model.

The last model, h(t), is a quadratic polynomial function model. This model implies that the graph is shaped like a parabola; however, we are only looking at one side of the parabola, where t ≥ 0. The model shows that when t = 0, the graph crosses the y-axis at 310.5 ppm of CO₂. Using this model to predict the rate of change for 2010, we again start by finding the derivative of the function, which in this case is h'(t) = 2(0.0135)t + 0.5133 = 0.027t + 0.5133. Plugging in 60 for t, we get a rate of change in CO₂ concentration levels in 2010 of 2.133 ppm/year based on this polynomial model. This is the largest predicted rate of change for our example year of 2010 out of all the models shown here and, in comparison to actual data from 2010, also the most accurate prediction.
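The three rates of change quoted above are easy to verify symbolically. Here is a short Python sketch using sympy with the model functions from this post (t measured in years since 1950); 2010 corresponds to t = 60.

import sympy as sp

t = sp.symbols('t')
models = {
    "linear":      303 + 1.3 * t,
    "exponential": 304 * sp.exp(0.0038 * t),
    "quadratic":   0.0135 * t**2 + 0.5133 * t + 310.5,
}

for name, model in models.items():
    rate_2010 = sp.diff(model, t).subs(t, 60)   # derivative evaluated in 2010
    print(f"{name}: {float(rate_2010):.2f} ppm/year")
# linear: 1.30, exponential: 1.45, quadratic: 2.13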

Here are the three functions plotted together using the Desmos graphing calculator, placed directly above a graph of the Keeling Curve:

Sources used:

https://www.britannica.com/science/Keeling-Curve

https://keelingcurve.ucsd.edu/

https://www.amnh.org/explore/videos/earth-and-climate/keeling-s-curve-the-story-of-co2/dataset-information

Typesetting Mathematics

This blog post is a short introduction to typing mathematical expressions using LaTeX commands. On the blogging platform we are using, this is achieved by MathJax. Some documentation can be found here.

If you have a mathematical expression to type, you may start by entering “$latex” (remove quotation marks), writing the corresponding LaTeX commands, and then closing with another “$” (again, no quotation marks).

Example: A power function takes the form f(x)=x^{a} ($latex f(x)=x^{a}$)

Most of the commands for mathematical typesetting are pretty intuitive:

Most stuff:

  • Just type them!
  • For example, 1+2=3 ($latex 1+2=3$) and f'(x) ($latex f'(x)$)

Multiplication:

  • Use \times or \cdot, but not *.
  • For example, 2 \times 3 = 6 ($latex 2 \times 3 = 6$) or 2 \cdot 3 = 6 ($latex 2 \cdot 3 = 6$)

Fractions:

  • Use \frac{numerator}{denominator} or \dfrac{numerator}{denominator}.
  • Remember to enclose the entire numerator and the entire denominator in pairs of curly brackets.
  • For example, \frac{1}{x} ($latex \frac{1}{x}$) or \dfrac{1}{x} ($latex \dfrac{1}{x}$)

Exponents:

  • Use ^{exponent}.
  • Again, remember to enclose the entire exponent in a pair of curly brackets.
  • For example, e^{-2x} (e^{-2x}, but not e^-2x, because the latter will produce e^-2x)

Subscripts:

  • Use _{subscript}. This is very similar to exponents.
  • For example, P_{0} (P_{0}) or a_{10} (a_{10}, but not a_10, because the latter will produce a_10)

Roots:

  • Use \sqrt{number} for square roots, and \sqrt[n]{number} for n-th roots.
  • For example, \sqrt{10} or \sqrt[3]{10} ($latex \sqrt{10}$ or $latex \sqrt[3]{10}$)

Special functions:

  • Use \sin, \cos, \ln for sine, cosine, and natural log functions.
  • For example, \sin(x), \cos(x), or \ln(x) ($latex \sin(x)$, $latex \cos(x)$, or $latex \ln(x)$)

Integrals:

  • Use \int_{a}^{b} for a definite integral with limits a and b, and use \int for an indefinite integral.
  • For example, \int_a^b f(x) dx or \int f(x) dx ($latex \int_{a}^{b} f(x) dx$ or $latex \int f(x) dx$)

Now let’s say you have a somewhat complicated expression to put together. See if you can map the LaTeX commands to the mathematical expression below:

\dfrac{d}{dx}[\sqrt{x^2-3x+3}] = \dfrac{2x-3}{2\sqrt{x^2-3x+3}}


Welcome to Got Calc!

This is the course blog for Math 211 (Short course in calculus) in Spring 2023. As part of the course, students will be writing posts on this blog about anything related to calculus.

I will make another post soon containing a quick guide for using MathJax to typeset mathematical expressions using LaTeX commands. For example, 3^3+4^3+5^3=6^3.