Web Lecture
When the objects of an inquiry, in any department, have principles, conditions, or elements, it is through acquaintance with these that knowledge, that is to say scientific knowledge, is attained. For we do not think that we know a thing until we are acquainted with its primary conditions or first principles, and have carried our analysis as far as its simplest elements. Plainly therefore in the science of Nature, as in other branches of study, our first task will be to try to determine what relates to its principles.
The natural way of doing this is to start from the things which are more knowable and obvious to us and proceed towards those which are clearer and more knowable by nature; for the same things are not 'knowable relatively to us' and 'knowable' without qualification. So in the present inquiry we must follow this method and advance from what is more obscure by nature, but clearer to us, towards what is more clear and more knowable by nature.
Now what is to us plain and obvious at first is rather confused masses, the elements and principles of which become known to us later by analysis. Thus we must advance from generalities to particulars; for it is a whole that is best known to sense-perception, and a generality is a kind of whole, comprehending many things within it, like parts. Much the same thing happens in the relation of the name to the formula. A name, e.g. 'round', means vaguely a sort of whole: its definition analyses this into its particular senses. Similarly a child begins by calling all men 'father', and all women 'mother', but later on distinguishes each of them.
Aristotle, Physics Bk 1 (Translated by R. P. Hardie and R. K. Gaye)
All levels of physics depend on some basic skills and concepts. In order to pursue the generally accepted methodology that allows us to move from observation to hypothesis to models and experiments, and then on to formulating theories and laws, we need to establish how we will measure data, what uncertainty and error we can tolerate, what units we will use, and how we will express values in an easy-to-understand, unambiguous format.
No measurement is perfect, and measurement apparatus is never ideal, so there will always be some error in all forms of measurement. An error is a deviation between the actual value of a measure and the indicated value produced by the sensor or instrument used to measure the value. Let me reiterate: Error is inherent, and is NOT the fault of the person making the measurement. Error is not the same as mistake! Understanding error can greatly improve our effectiveness in making measurements.
Joseph J. Carr, The Art of Science
The field of science we now call physics grew out of the natural philosophy of the ancient Ionians, the Greek-speaking inhabitants of what is now western Turkey. Originally, this philosophy covered all natural phenomena, but in the last three centuries, as we learned more about nature and its complexities, scientists had to specialize in order to master a particular area. Different branches of natural philosophy were identified and defined, and now are often studied independently. The forms and combinations of different types of matter are covered by chemistry. Matter and energy in living beings is the special province of biology. Earth phenomena became the study of geology; anything beyond the earth became part of the study of astronomy, even when it doesn't have to do with stars in particular. This leaves physics with the topics of matter (of any kind) in motion, energy, force, and atomic structure (matter qua matter, rather than interactions between elements).
The line between physics and other modern scientific disciplines is sometimes very fuzzy. Physical chemistry, which deals with electron orbit descriptions, depends heavily on physics' concept of the atom. In biophysics, biologists study how animals move and sense their surroundings using electrical neural impulses, which means they must understand electrical current theory as it applies to living cells. Structural engineers must understand force and stress analysis in order to build structures which will not collapse, and electrical engineers need to understand how electrical fields interact in order to create memory devices.
One thing which distinguishes the physicist's approach to these topics from that of the chemist, biologist, or geologist, is his* use of mathematics. Physicists since Galileo and Newton are not really satisfied until they can write a simple (well, their idea of simple) mathematical description of a physical situation. They will try to describe even complex situations, such as the motion of the atmosphere during a tornado, using mathematical techniques such as chaos theory. There are both advantages and limitations to this approach, as we shall see.
Let's consider, for a moment, the nature of scientific thought, that is, how scientists think about the world around them. In physics in particular, we deal with abstractions and generalizations: all motion occurs this way, all forces act that way, and we try to reduce all kinds of events to mathematical descriptions. This method of abstracting an event from the real world has its origin in the philosophical investigations of the Ionian philosophers of Greece, six hundred years before Christ was born. They wanted to know what matter was--not just this or that particular lump of matter, but the essence that distinguishes matter from non-matter. They saw all change as a movement from one state to another, and wondered whether the movement of stars across the heavens each night had anything to do with the slow movement of plants toward the sun, water toward the sea, or man toward death. They saw patterns in many natural events, and came to the conclusion that all such events have particular causes. Repeat the cause, and the event repeats. For a given set of events, then, what is the cause? Is there more than one cause? And ultimately, can we control the cause to control the event? This last question leads us into technology and its implications.
The "process of science" described in many science texts is a simplification of a method which Galileo used to investigate motion in the 17th century. According to this method, a scientist observes some phenomenon, and devises a hypothesis to describe what is happening. A key feature of his hypothesis is a prediction about what will happen under specified circumstances. He then designs an experiment, eliminating all irrelevant conditions, forces, or influences, and tests whether his hypothesis works to predict the experimental outcome. If it works, he may use it to predict further events in similar situations, and continue testing and refining the hypothesis. If it doesn't work, he will use the experimental evidence to modify the hypothesis, and try again. An elaborate and detailed hypothesis which survives many tests and describes a wide range of phenomena becomes a theory, and if it is not disproved over time, the theory may take on the force of a natural law--a description which is largely accepted and unquestioned by the human community.
In this version of the ideal scientific method, a theory can never be absolutely proven to be "true". It is accepted because it has not yet been disproven, but the acceptance is always provisional. As long as no exceptions occur which cannot be explained as problems of error in measurement, or inability to eliminate all interfering factors (such as friction in motion experiments), the theory is acceptable. In Plato's phrase, it "saves the appearances", or accounts for everything it is supposed to cover. However, if unexplained exceptions occur, or new phenomena are observed which the theory cannot describe, it will be replaced by a better theory.
What happens when two competing theories both "save the appearances" adequately? Scientists generally choose the simpler explanation, where "simple" is a subjective judgment. This method is based on "Occam's Razor", the idea proposed by William of Occam in the 14th century, which is "don't multiply hypotheses" when putting together a theory. In other words, the simplest explanation is usually the best one. If nothing else, it is generally easier to use.
The study of theory replacement is a major part of the history of science, and if you are interested, you may want to read Thomas Kuhn's The Structure of Scientific Revolutions.
Direct experimentation isn't always possible (no astronomer has star stuff in his lab), and even when we can do an experiment, we often make assumptions or simplify the situation in order to make it possible to predict the outcome mathematically. As experimenters or observers, we need to be sure that we record as much information as possible, and sort out the implications as we become more familiar with the phenomenon under investigation. Assuming that a set of information is irrelevant may lead us to ignore a key factor influencing our experiment.
Another problem with experimenting occurs when we cannot observe a phenomenon directly, but have to infer what happens from direct measurements of related quantities. For example, suppose you want to know the area of a jar lid. You can't measure the area directly; you have to measure the radius of the lid (after determining the location of the center of the lid), and then use the relation A = pi * r^2. If you are wrong about how the radius of a circle is related to its area (suppose you assume that pi = 3), you will get the wrong area. Many experiments we will discuss work the same way: we will measure quantities A and B in order to determine what C should be.
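The jar-lid example is easy to sketch in code. The radius value below is hypothetical, chosen only to show how a wrong assumed relation (pi = 3) skews the derived area:

```python
import math

def lid_area(radius_mm):
    """Area of a circular lid, derived from a direct measurement of its radius."""
    return math.pi * radius_mm ** 2

r = 38.0                  # hypothetical measured radius, in mm
correct = lid_area(r)     # uses A = pi * r^2
wrong = 3 * r ** 2        # the 'pi = 3' mistake
print(correct, wrong)     # the wrong assumption understates the area by about 4.5%
```

Since 3/pi is about 0.955, the "pi = 3" area always comes out roughly 4.5% low, no matter what radius you measure.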
Many of the problems in your text will deal with idealized situations. One of my favorite dynamics problems in college began with the instruction "Assume a perfectly rigid ladder on a frictionless floor, leaning against a frictionless, vertical wall." Of course, there is no perfectly rigid ladder, and even in this age of teflon, no wall is really frictionless. But these simplifications do let us limit the factors in the situation, allow us to do the math (which would otherwise be far more complicated) and get a reasonable estimate for the consequences of sending a mass of exactly two kilograms up the ladder.
These simplified situations are simulations, which today we may be tempted to think of only in the context of computers, but which have been used ever since the Ionians as a way of generalizing and simplifying situations. A simulation models the behavior of a real situation by proposing a situation in which every entity is analogous to an entity in the real situation. A good model can become a generally accepted way of thinking about a whole set of events.
As we proceed, we need to keep asking questions to make our model as accurate as possible.
We always need to remember, however: a simulation is not reality. Physicists can speculate about perfect bridges across a river; engineers have to build in safety "excess" because the materials and conditions they work with are not perfect, and are not always perfectly understood.
During this course we will need to understand how to estimate answers, how to use scientific notation and significant figures, and how to determine errors in measurements. Your text goes into these concepts as well, but on the premise that more than one explanation may help, I've put together an example for you.
Let's start by determining the area of a postcard of Portland, Oregon (because it's there). Using my handy ruler, which is marked in millimeters and centimeters, I get the length of the long side as 152.1 mm (millimeters), give or take 0.1 mm. The short side is 101.2 mm, give or take 0.1 mm.
Rapidly estimating the area, I can conclude that the postcard is roughly 10 cm * 15 cm, or 150 cm2. This is not a wild guess; it is based on rounding real data only enough to make a rapid calculation easier. Any numbers I get by punching into a calculator had better be close to that amount, or I have punched something in incorrectly.
Notice also that my dimensions are multiplied as well as the numbers; I started with mm, but I wind up with mm2, which are the units appropriate to area. Besides checking that your numbers are in range, always be sure that your units are appropriate to the result. For more information on dimensional analysis, you may want to consult Appendix B in your text.
Doing the actual calculation, I find that
152.1 mm * 101.2 mm = 15392.52 mm2
But I didn't actually measure out the 2.52 part, and I can't claim to be accurate to 7 places. As with any measurement and calculation, I need to determine how many digits of my answer are significant, that is, how many carry real and reliable information, and how many are artifacts of the way I manipulated them to get the area. This question is especially important if I use a calculator, since it may (in division, for example) carry out an operation to a numerical accuracy far beyond any physical accuracy in the input information.
There are two rules for determining significant figures after an arithmetic operation, depending on the operation involved. For multiplication and division, the answer keeps no more significant figures than the least precise value used: since 152.1 and 101.2 each have four significant figures, the area must be reported as 15390 mm2, not 15392.52 mm2. For addition and subtraction, the answer keeps no finer a decimal place than the least precise value used:

| 31.15 |
| 22.953 |
| + 16.4 |
| ———— |
| 70.503 |

Since 16.4 is known only to tenths, this sum must be reported as 70.5.
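The two conventional rules--keep the fewest significant figures of any factor when multiplying or dividing, and keep the least precise decimal place of any term when adding or subtracting--can be sketched in Python. The helper function below is my own illustration, not something from your text:

```python
import math

def round_sig(x, n):
    """Round x to n significant figures (the multiplication/division rule)."""
    if x == 0:
        return 0.0
    # How many decimal places keep exactly n significant digits?
    return round(x, n - 1 - math.floor(math.log10(abs(x))))

area = 152.1 * 101.2              # calculator says 15392.52
print(round_sig(area, 4))         # keep 4 significant figures -> 15390.0

total = 31.15 + 22.953 + 16.4     # calculator says 70.503
print(round(total, 1))            # 16.4 is known only to tenths -> 70.5
```

Note that Python's built-in `round` works by decimal place, which matches the addition rule directly; the helper converts a significant-figure count into a decimal place for the multiplication rule.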
Now, I happen to prefer looking at units for such areas in centimeters rather than millimeters. In order to convert the units, I need to realize that 1 cm2 = 100 mm2, so I multiply 15390 mm2 * 1 cm2/100 mm2 = 153.9 cm2. Notice that my units cancel--all mention of mm2 goes away. Notice also that the zero on the right of 15390 is not significant: we eliminated the 2.52 as excess.
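This conversion can be checked in a couple of lines of Python (a sketch of my own, not from the text). The key point is that the area conversion factor is the square of the length conversion factor:

```python
MM2_PER_CM2 = 100.0   # 1 cm = 10 mm, so 1 cm^2 = (10 mm)^2 = 100 mm^2

def mm2_to_cm2(area_mm2):
    """Convert an area from square millimeters to square centimeters."""
    return area_mm2 / MM2_PER_CM2

print(mm2_to_cm2(15390.0))    # 153.9
```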
There are special rules for zeroes. Zeroes to the left of the first nonzero digit, as in 0.0042, are placeholders and are never significant. Zeroes between nonzero digits, as in 405, are always significant. Zeroes on the right are significant when the number contains a decimal point, as in 716.0, but are ambiguous otherwise (does 15390 have four significant figures, or five?).
So those are the conventional rules for significant figures. The problem is that sometimes following these rules may result in reporting an answer less accurately than the data actually warrants. Consider the situation where we divide as follows:

975 / 922 = 1.05748...

The values 975 and 922 each have three significant figures, so our significant figure rules require that we have only three significant figures in the answer (1.06). But a value of 975 implies that we have an error of 1 part in close to 1000 parts, or 0.1%. We could justify reporting a rounded answer as 1.057 (four significant figures) in this case.
We can clarify which right-side zeroes are significant by using scientific notation: writing 716.0 * 10 indicates unambiguously that the 0 is significant, since it is written out. The usual practice is to reduce the number to one decimal place and multiply by the appropriate power of ten. The implied error is then plus or minus one tenth, times that power of ten.
There are different ways of representing scientific notation. One common way is the one shown above: 593.1 * 10^3 works out to 593100. Another way to represent this number, and one which is common on calculators, is 593.1E3. You may use either.
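Python's standard string formatting can produce the calculator-style "E" notation, and it also shows how scientific notation preserves a significant trailing zero. These are ordinary format-specifier calls, not anything specific to your text:

```python
value = 593.1e3               # Python accepts E notation directly: 593100.0

# Calculator-style notation, mantissa rounded to 4 significant figures
print(f"{value:.3E}")         # 5.931E+05

# A significant trailing zero survives in scientific notation:
print(f"{716.0:.3e}")         # 7.160e+02 -- the final 0 is visibly significant
```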
If I need to calculate real error ranges, then I should take the extreme amounts indicated by my error estimates, and work out the areas.
| Actual measurements (cm) | High possible values | Low possible values |
|---|---|---|
| 15.21 | 15.22 | 15.20 |
| * 10.12 | * 10.13 | * 10.11 |
| 153.9252 | 154.1786 | 153.6720 |
The difference is 153.9252 cm2 plus .2534 cm2 or minus .2532 cm2; in other words, we have a possible error of .253 cm2 or so in our final answer. An easy way to determine whether this is good or bad is to look at the difference as a percent of the "actual" measurement. In this case, we would do the operation .253/153.9252 = .00164 (on my calculator). This works out to about .2%, or roughly two parts per thousand. Whether this error is significant depends on the circumstances. A miscalculation of .2% in planetary orbit calculations may mean that your very expensive exploration satellite misses the planet entirely. In the case of my postcard, however, I probably won't notice the difference in area.
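This high/low bracketing is easy to automate. A quick sketch, using the same measurements as above:

```python
length, width, u = 15.21, 10.12, 0.01    # cm, with +/- 0.01 cm reading error

area = length * width                    # 153.9252 cm^2
high = (length + u) * (width + u)        # 154.1786 cm^2
low  = (length - u) * (width - u)        # 153.6720 cm^2

plus, minus = high - area, area - low    # roughly 0.2534 and 0.2532
percent = max(plus, minus) / area * 100  # worst case, as a percent of the area
print(round(percent, 2))                 # 0.16 -- well under 2 parts per thousand
```

Taking the worse of the two one-sided differences gives a conservative (pessimistic) error estimate, which is usually what you want when deciding whether a measurement is good enough.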
In measuring the postcard, we moved from actual measurements to a derived quantity. Some physical quantities are always derived. The seven base quantities recognized in the Système International (the internationally-used metric system, abbreviated SI) are
| Type of quantity | Unit |
|---|---|
| Amount of substance | mole |
| Electric current | ampere |
| Length | meter |
| Luminous intensity | candela |
| Mass | kilogram |
| Temperature | kelvin |
| Time | second |
All other quantities can be expressed as some combination of two or more of these units. Velocity is length/time, and density is mass/volume (volume itself being a quantity derived from length).
Each of these units can be expanded by a prefix which tells how many base units are needed for the new unit. A kilogram is 1000 grams. A millimeter is 1/1000 of a meter. You should become familiar with the prefixes and the base units in various systems which are listed inside the front cover of your text.
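A few of the prefixes can be captured in a small lookup table. The subset and the helper function below are my own illustration; see the front cover of your text for the full list:

```python
# A handful of common SI prefixes, as powers of ten (not the full set)
SI_PREFIXES = {"kilo": 1e3, "centi": 1e-2, "milli": 1e-3, "micro": 1e-6}

def convert(value, from_prefix, to_prefix):
    """Convert between two prefixed versions of the same base unit."""
    return value * SI_PREFIXES[from_prefix] / SI_PREFIXES[to_prefix]

print(round(convert(152.1, "milli", "centi"), 4))   # 15.21 -- 152.1 mm is 15.21 cm
```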
Be prepared to discuss the following questions and give examples for your position.
© 2005 - 2025 This course is offered through Scholars Online, a non-profit organization supporting classical Christian education through online courses. Permission to copy course content (lessons and labs) for personal study is granted to students currently or formerly enrolled in the course through Scholars Online. Reproduction for any other purpose, without the express written consent of the author, is prohibited.