It is mid-June in 2004 and a federal election campaign is in full swing. The newspapers have Canada’s two main parties in a dead heat. Here’s a conversation between a friend and me, starting with the friend:

“The Liberals are going to win a minority government.”

“Quite possibly. But maybe in the next two weeks the Conservatives will come on strong and Martin will stumble. Or possibly it’ll go the other way. We can’t know what’s going to happen in politics.”

“I just know we’re heading for another Liberal government but without a majority.”

“You can’t know that.”

“You’ll see.”

Now here’s the conversation a month later, starting with the friend again.

“See, I was right. I knew the Liberals would win a minority.”

“You didn’t know it. Even I guessed they likely would win.”

“I predicted they would win a minority and they did. But you doubted it.”

I can’t argue with that. He did predict it and I did express doubt.

But still, I insist, I was right and he was wrong. I was right to indicate that on the evidence a Liberal minority government was “quite possible” while he was wrong to say he “knew” the election outcome.

But I’ll never convince him of that.

It’s a similar situation in the ongoing battle between science and the paranormal. We keep hearing about psychics and seers whose predictions are “right”. The occasional hit gets trumpeted as though it proves the psychic knew what would happen in the future.

Skeptical or scientific thinkers are at a disadvantage in our own predictions of the future. Outside those few areas where we have relative certainty, we don’t claim to know absolutely — we can only calculate probabilities based on available, incomplete evidence.

But most people do not understand probabilities; they prefer to think in absolute terms.

So what are probabilities?

Beginners with a little mathematical training often consider probability just a matter of calculation. What are the odds of heads coming up in a coin flip? A coin has two sides, therefore the odds are one in two.

But even considering probabilities in this straightforward manner, we can get confusing results. For proof, consider the famous Monty Hall problem. It’s too involved to go into here but you’ll find it at www.skeptics.ca/mindbogglers/index.html under “Game Show Doors”. Suffice it to say that some of the smartest people in the world, including skeptics, have argued vociferously over this seemingly simple problem, often refusing to accept the counter-intuitive but confirmed results.
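If you’d rather test the Monty Hall result than argue about it, a short simulation settles the matter empirically. This is a minimal sketch in Python (my own illustration, not from the original article) of the standard version of the game: three doors, one car, and a host who always opens a goat door the player didn’t pick.

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round: three doors, one car. Return True if the player wins."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that hides a goat and isn't the player's pick.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

random.seed(0)
trials = 100_000
stay_rate = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
switch_rate = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
print(f"stay with first pick: {stay_rate:.3f}")   # close to 1/3
print(f"switch doors:         {switch_rate:.3f}") # close to 2/3
```

Over a large number of trials, staying wins about a third of the time and switching about two-thirds — the counter-intuitive but confirmed result the arguments were about.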

And most things in life involving probabilities are more complicated even than the Monty Hall problem. Like elections. Although we have two main political parties in Canada, we don’t assume each of them has a one-in-two chance of winning any given election, nor does having four mainstream parties mean they each have one-in-four odds. We all realize there are many more factors to be taken into account — party platforms, charisma of leaders, regional interests, historical patterns, the amount of money each party spends, the work of local candidates, and so on.

When my friend claimed to “know” the Liberals would win a minority, most of us recognize this as an exaggeration. The classical definition of “knowledge” is justified true belief. Note: justified. It’s not enough to have a belief that turns out to be correct. For that belief to be considered knowledge it must also have been backed by reasons or evidence. If the basis for my friend’s belief in the election’s outcome was the flipping of a coin, we would more likely call the prediction a lucky guess, rather than the result of knowledge. If it was based on gut feel, we might call it an intuition or “guesstimate”. But not knowledge.

But the trickier question regards being “right” in one’s predictions.

My friend did predict the future correctly. Was he therefore “right” and was I “wrong” to have been less confident? Before the election my friend had said the probability of a Liberal minority was 1 (as a 100-percent likelihood is called by statisticians) and after the fact the Liberal minority government was also a probability of 1 — it really happened. So my friend was dead on. My own probability rating for a Liberal minority might have been anywhere from .5 (50/50 odds) to .9 (90-percent likelihood). So I was off by .1 to .5. That is, I was wrong.

On the other hand, it could be argued that both my friend and I thought a Liberal minority was possible. So we were both right.

On the third hand (let’s pretend we’re aliens with additional arms), right and wrong may not apply here at all, at least not in regard to the actual outcome. This is the view that I favour. Even if I had given the Liberals only a one-in-10 chance of winning a minority, I would argue that the election result alone did not prove me wrong in my prediction, nor did it prove me any less right than my friend who made the 100-percent prediction.

One of the competing theories about the relation of probability to facts is known as the “relative frequency” interpretation. This interpretation holds that a probability indicates what the result would be over many trials. A probability of .5 means that if we ran the trial enough times, we’d get the predicted result 50 percent of the time. A probability of 1 means that however many times we ran it we would always get the same result.
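The relative-frequency idea is easy to see with a simulated coin, where we actually can run the trial as many times as we like. Here’s a minimal Python sketch (my own illustration; the function name is made up for the example): short runs wander, long runs settle toward the stated probability.

```python
import random

def observed_frequency(p_heads: float, trials: int) -> float:
    """Fraction of heads actually seen over a run of simulated coin flips."""
    heads = sum(random.random() < p_heads for _ in range(trials))
    return heads / trials

random.seed(1)
# Under the relative-frequency view, a probability of .5 means "heads half
# the time over many trials" -- and the longer the run, the closer we get.
for n in (10, 100, 10_000, 1_000_000):
    print(f"{n:>9} flips: {observed_frequency(0.5, n):.4f}")
```

Ten flips can easily come up 70 percent heads; a million flips almost never stray far from 50 percent. That’s the sense in which the interpretation ties a probability to repeated trials.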

However, it would be impossible to run a federal election several hundred times to determine whose estimate was correct.

My own preferred approach to probability is what is called the “logical relation” interpretation, which holds that a probability is an expression of the available evidence. In a way, probability is a measure of ignorance rather than of knowledge. I’ll explain:

After a coin is flipped and we see it’s come up heads, the probability of heads on that coin toss (which has already happened) is 1, of course, and the probability of tails is zero. We already know what happened — the actual coin toss result is complete evidence. It’s easy to predict the past with 100-percent accuracy!

But before the coin toss and without any other evidence to go on, we don’t know which of the two ways the coin will come up and so the chance of either heads or tails is exactly 0.5. Now if we happen to know that one side of the coin is slightly rounder and thus more likely to flip over, we might modify those odds. Or if we could accurately gauge the amount of force exerted by the flipping thumb, the starting position of the coin, the air resistance and so on — if we could calculate every single movement of the coin from toss to landing — the probability of a particular result in our calculation of odds would approach 1. That is, it grows with our evidence. The only thing that keeps it from reaching 1 would be some remaining ignorance about something that affects the toss. The result of any single coin toss would not change our correctness in calculating the odds based on all the known factors before the toss. The only thing right or wrong is the appropriateness of the calculations to the known factors.
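The point that a probability estimate improves with evidence can also be illustrated with a simulated coin. In this sketch (my own hypothetical example; the 70-percent bias is invented for illustration), one side of the coin is rounder, so heads comes up 70 percent of the time. An estimate of .5 made in ignorance of the bias is a worse match for the long-run outcome than an estimate that folds the evidence in:

```python
import random

random.seed(2)

# Hypothetical biased coin: one side is rounder, so heads lands 70% of the time.
TRUE_P_HEADS = 0.70

flips = [random.random() < TRUE_P_HEADS for _ in range(100_000)]
freq = sum(flips) / len(flips)

naive_estimate = 0.50     # no evidence beyond "a coin has two sides"
informed_estimate = 0.70  # evidence about the rounder side taken into account

print(f"long-run frequency of heads: {freq:.3f}")
print(f"error of the naive .5 estimate:    {abs(freq - naive_estimate):.3f}")
print(f"error of the informed .7 estimate: {abs(freq - informed_estimate):.3f}")
```

Neither estimate was “wrong” at the moment it was made; each was as good as the evidence behind it. More evidence simply moves the calculation closer to 1.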


Of course, many repeated coin tosses might give us an idea of whether our calculation methods are correct. But with most of the predictions we face as skeptics — an election prediction, an alleged prophecy of 9/11, or a foretelling that someone will meet a tall, dark stranger — we don’t have the luxury of running countless trials.

The one-time-only prediction that is fulfilled cannot be considered “right” by the result itself. Nor can a single result show that an absolute, 100-percent prediction is more “right” than an assessment of lesser probability.

To take an extreme example, if I say the chances of Joe Smith winning the lottery are one in a million and my friend says the odds of Joe Smith winning are 99 percent, and then in fact Joe Smith wins the lottery, neither my friend nor I have been proven right or wrong by that single trial. I may be shaken by the result and go back to my calculations to see if I’d made a mistake. But without any other evidence either way, we were each right or wrong only insofar as we did our calculations rightly or wrongly — not in the prediction itself.

All we can do in those one-off cases is look at the evidence the prediction is based on and determine whether it justifies the evaluation.

Regardless of the election outcome, my friend was “right” in his 100-percent probability assessment for a Liberal minority *only if* he can show that evidence available to him compelled that evaluation.

Of course, my friend would never accept this. When people make a blanket prediction and have that result come about, they’re confirmed in their hearts that they have an inside road to truth.

I’ll have to put up with my friend’s crowing — at least until the next election. I’m not predicting that one.