Prediction Markets Won't Work, and Here's Why
Prediction markets won't work.
I'm not just saying this; serious research has been done on the subject that reaches the same conclusion.
Because of the know-it-all attitude I run into with many computer scientists (which I hate 😁), I take extra caution not to do the same. I say prediction markets won't work only after a massive amount of thinking, with evidence to back it up. Perhaps this piece will be like ripping the band-aid off: resetting investors' expectations, and forcing the producers of such markets to make solid game-theoretic modifications so they can provide real value over time. I'm not saying they won't work at all; they'll work slightly. I am saying they won't be as high-impact as we think they will be. For them to reach high impact, they need a lot of little changes in service of major objectives.
I'll be referring to Duncan Watts's book, *Everything Is Obvious: Once You Know the Answer*. He's a physicist turned sociologist, turned computer scientist. His book got me deeply into systems theory and complexity in relation to the socio-economic realm years ago, so much so that I'm now focusing and staking my entire career on it, even though so far the socio-econo-physics, complexity, and analytics industry hasn't yielded many productive results for the world since its inception (besides compromising privacy to sell things). We still haven't solved market crashes, inequality, wars between countries, global warming, or massive global debt. We've provided little value so far. My goal is to change that over time: to prove our worth as an industry and provide value to people, using the blockchain as a medium.
The book is an easy read, and it includes references on why prediction markets won't work. In it he talks about common sense and how it fails us on large-scale problems. We're only going to focus on the prediction side in this piece. If I get a reasonable response to this (not necessarily a positive one), I'll write more about complexity economics, social complexity, and systems theory.
Reasons Why Prediction Markets Won't Work
1. Predicting Large Complex Systems Is Extremely Difficult
In chapter 7, between pages 161 and 171, Duncan Watts talks at length about making predictions on complex adaptive systems. Generally, the larger and more complex a system is, the more difficult it is to predict the events that follow. This is especially the case when you, and everyone else, face a massive degree of information asymmetry.
Duncan Watts stated the following about complex systems:
In complex systems, however, which comprise most of our social and economic life, the best we can hope for is to reliably estimate the probabilities with which certain kinds of events will occur. Second, common sense also demands that we ignore the many uninteresting, unimportant predictions that we could be making all the time, and focus on those outcomes that actually matter. In reality … black swan events that we most wish we could have predicted are not really events at all, but rather shorthand descriptions—“the French Revolution,” “the Internet,” “Hurricane Katrina,” “the global financial crisis”—of what are in reality whole swaths of history. Predicting black swans is therefore doubly hopeless, because until history has played out it’s impossible even to know what the relevant terms are.
He doesn't mean that everything in existence is unpredictable. He says later that there's a fine line between predictable elements and unpredictable ones, and he follows with this statement:
To oversimplify somewhat, there are two kinds of events that arise in complex social systems—events that conform to some stable historical pattern, and events that do not … Every year, for example, each of us may or may not be unlucky enough to catch the flu … because seasonal influenza trends are relatively consistent from year to year, drug companies can do a reasonable job of anticipating how many flu shots they will need to ship to a given part of the world in a given month … consumers with identical financial backgrounds may vary widely in their likelihood of defaulting on a credit card, depending on what is going on in their lives … credit card companies can do a surprisingly good job of predicting aggregate default rates by paying attention to a range of socioeconomic, demographic, and behavioral variables. And Internet companies are increasingly taking advantage of the mountains of Web-browsing data generated by their users to predict the probability that a given user will click on a given search result.
Prediction markets don't distinguish between what's reasonably predictable and what's not. The larger and more abstract the event, the less likely it is that we can extract a solid prediction from them.
2. Prediction Markets Provide Little Gain Compared to Statistical Studies
The prospect of prediction markets is very appealing.
In that same chapter, Watts puts some focus on prediction markets, starting with an introduction of the idea:
One increasingly popular method is to use what is called a prediction market—meaning a market in which buyers and sellers can trade specially designed securities whose prices correspond to the predicted probability that a specific outcome will take place. – p 164
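To make that definition concrete, here is a minimal sketch of my own (not from the book, and the fee-normalization step is an assumption about how real exchanges price contracts): a binary contract that pays $1 if the event occurs trades at a price between $0 and $1, and that price can be read directly as the market's implied probability.

```python
# Hypothetical sketch: reading implied probabilities from a binary
# prediction-market contract that pays $1 if the event occurs.

def implied_probability(price: float) -> float:
    """A contract trading at $0.64 implies a ~64% event probability."""
    if not 0.0 < price < 1.0:
        raise ValueError("price must be strictly between 0 and 1")
    return price

def normalized_probabilities(yes_price: float, no_price: float) -> tuple[float, float]:
    """Real markets often quote YES + NO summing to more than $1 (the
    'overround'); normalizing strips out that margin."""
    total = yes_price + no_price
    return yes_price / total, no_price / total

print(implied_probability(0.64))             # 0.64
print(normalized_probabilities(0.64, 0.40))  # roughly (0.615, 0.385)
```

The point of the normalization step is that raw quoted prices are not quite probabilities in practice; the exchange's margin has to be removed before reading them as a forecast.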
He continues by examining why our sentiment toward prediction markets is so high. I too get excited about the idea of them. He laid out a potential scenario of how they would work:
The potential of prediction markets to tap into collective wisdom has generated a tremendous amount of excitement among professional economists and policy makers alike. Imagine, for example, that a market had been set up to predict the possibility of a catastrophic failure in deep-water oil drilling in the Gulf prior to the BP disaster in April 2010. Possibly insiders like BP engineers could have participated in the market, effectively making public what they knew about the risks their firms were taking. Possibly then regulators would have had a more accurate assessment of those risks and been more inclined to crack down on the oil industry before a disaster took place. Possibly the disaster could have been averted.
However, he and many others have run studies on prediction markets to test whether this actually happens.
Watts tested the accuracy of markets against simple statistical models:
little attention has been paid to evaluating the relative performance of different methods, so nobody really knows for sure. To try to settle the matter, my colleagues at Yahoo! Research and I conducted a systematic comparison of several different prediction methods, where the predictions in question were the outcomes of NFL football games. To begin with, for each of the fourteen to sixteen games taking place each weekend over the course of the 2008 season, we conducted a poll in which we asked respondents to state the probability that the home team would win as well as their confidence in their prediction. We also collected similar data from the website Probability Sports, an online contest where participants can win cash prizes by predicting the outcomes of sporting events. Next, we compared the performance of these two polls with the Vegas sports betting market—one of the oldest and most popular betting markets in the world—as well as with another prediction market, TradeSports. And finally, we compared the predictions of both the markets and the polls against two simple statistical models. The first relied only on the historical probability that home teams win—which they do 58 percent of the time—while the second model also factored in the recent win-loss records of the two teams in question. In this way, we set up a six-way comparison between different prediction methods—two statistical models, two markets, and two polls.
Given how different these methods were, what we found was surprising: All of them performed about the same. To be fair, the two prediction markets performed a little better than the other methods, which is consistent with the theoretical argument above. But the very best performing method—the Las Vegas market—was only about 3 percentage points more accurate than the worst-performing method, which was the model that always predicted the home team would win with 58 percent probability.
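This kind of comparison can be illustrated with a standard scoring rule. The sketch below is my own, on synthetic data (it is not the Yahoo! study's code): it scores the constant "home team wins 58% of the time" baseline against a hypothetical noisy market forecast using the Brier score, the mean squared error between predicted probabilities and actual 0/1 outcomes.

```python
# Illustrative sketch (synthetic data, not the study's actual code):
# comparing a constant 58% home-win baseline against a hypothetical
# "market" forecast, using the Brier score (lower = better).
import random

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(outcomes)

random.seed(0)
n_games = 10_000
true_home_win_prob = 0.58

# Synthetic outcomes: home team wins ~58% of the time.
outcomes = [1 if random.random() < true_home_win_prob else 0
            for _ in range(n_games)]

# Method 1: the naive baseline always predicts 58%.
baseline = [0.58] * n_games

# Method 2: a hypothetical market whose forecast is the truth plus noise.
market = [min(0.99, max(0.01, true_home_win_prob + random.gauss(0, 0.05)))
          for _ in range(n_games)]

print(f"baseline Brier: {brier_score(baseline, outcomes):.4f}")
print(f"market   Brier: {brier_score(market, outcomes):.4f}")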
He's a reasonable scientist: when he obtained a result, he tested whether he was wrong using other data sets. We generally call this falsification, the process of testing a hypothesis for inaccuracies. Doing this in the social realm is very hard, yet it requires the same rigor. He followed up with another set of studies.
First, he took the result to prediction market researchers:
When we first told some prediction market researchers about this result, their reaction was that it must reflect some special feature of football … Football games, in other words, have a lot of randomness built into them—arguably, in fact, that’s what makes them exciting. In order to be persuaded, our colleagues insisted, we would have to find the same result in some other domain for which the signal-to-noise ratio might be considerably higher than it is in the specific case of football.
So they tested baseball. These were the results:
We compared the predictions of the Las Vegas sports betting markets over nearly twenty thousand Major League baseball games played from 1999 to 2006 with a simple statistical model based again on home-team advantage and the recent win-loss records of the two teams. This time, the difference between the two was even smaller—in fact, the performance of the market and the model were indistinguishable. In spite of all the statistics and analysis, in other words, and in spite of the absence of meaningful salary caps in baseball and the resulting concentration of superstar players on teams like the New York Yankees and Boston Red Sox, the outcomes of baseball games are even closer to random events than football games. – p170
3. Some People Want to See The World Burn
This is the final reason.
In *The Dark Knight*, Alfred gives a powerful speech to Bruce Wayne, in which he states that "some men just want to watch the world burn." This is a problem faced by every game-theoretic system. It shows up in characters like the Joker in Batman and Hisoka in Hunter x Hunter: characters who destroy just for the fun of it.
Duncan Watts actually explored this concept. He states the following:
… it exposed a potential vulnerability of the theory, which assumes that rational traders will not deliberately lose money. The problem is that if the goal of a participant is instead to manipulate perceptions of people outside the market (like the media) and if the amounts involved are relatively small (tens of thousands of dollars, say, compared with the tens of millions of dollars spent on TV advertising), then they may not care about losing money, in which case it’s no longer clear what signal the market is sending.
Prediction markets ignore reflexivity and the desire simply to destroy things, and ultimately there are no protections against this. Even if a market found enough active participants, you would have to worry about somebody spending $1-2 million just to influence perceptions on small yet significant questions. That could wreak havoc on people using such markets to plan, especially if those plans are leveraged. It's a problem of opportunity cost: if I earn more elsewhere by destroying your system, even indirectly, I'll just do it, because why not?
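The opportunity-cost argument can be made concrete with a back-of-the-envelope calculation. This is my own sketch, and every number in it is hypothetical, including the crude linear price-impact model: the manipulator simply compares the expected trading loss from pushing the price against the external value of the shifted perception.

```python
# Back-of-the-envelope sketch (all numbers hypothetical): a "rational
# loser" compares the expected trading loss from moving a market price
# to the external value gained from shifting observers' perceptions.

def cost_to_move_price(liquidity_usd_per_point: float,
                       shift_points: float) -> float:
    """Crude linear-impact assumption: moving a thin market costs roughly
    (dollars of liquidity per probability point) times the desired shift."""
    return liquidity_usd_per_point * shift_points

def manipulation_payoff(expected_trading_loss: float,
                        external_gain: float) -> float:
    """Net payoff of manipulating; positive means manipulation is rational."""
    return external_gain - expected_trading_loss

# Hypothetical thin market: $5k of depth per probability point, and the
# manipulator wants to shift the implied probability by 20 points.
loss = cost_to_move_price(liquidity_usd_per_point=5_000, shift_points=20)

# If the shifted perception is worth, say, $10M (avoided ad spend, moved
# policy), burning ~$100k of trading losses is trivially worthwhile.
net = manipulation_payoff(expected_trading_loss=loss, external_gain=10_000_000)
print(f"trading loss: ${loss:,.0f}, net payoff: ${net:,.0f}")
```

As Watts notes, this is exactly the failure mode of the "rational traders won't deliberately lose money" assumption: once the payoff lives outside the market, losing money inside it can be the profitable move.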
It's essentially the same problem as market manipulation. People are fine destroying a market if they get some indirect benefit from it. George Soros did it when he broke the Bank of England; some unknown figures did it when they tanked the market below $6,000. It's easy. There's no defense mechanism against it in international markets, where anybody with a computer can tap in and blow things up.
I recall seeing a recent article by somebody on this subreddit. He was putting together a solution to convert the uncooperative games people may want to play into cooperative games, using staking to limit options. It works to an extent, but it runs into the problem of destructive tendencies and opportunity cost. It also requires identity, which I doubt people will subscribe to if they don't have to.
So that concludes my reasoning for why prediction markets won't work. It's mostly an analysis. One might infer that just because you can't use them in one way doesn't mean you can't use them in others; I would say it's unreasonable to believe that's the case here. The predictions are just too wide in range and not well enough defined.
Again, if I get feedback on this I'll post on other topics like complexity economics, social complexity, and systems theory.
Sources and Bits of Information:
- Ian Ayres (author of *Super Crunchers*) calls the relative performance of prediction markets "one of the great unresolved questions of predictive analytics" (http://freakonomics.blogs.nytimes.com/2009/12/23/prediction-markets-vs-super-crunching-which-can-better-predict-how-justice-kennedy-will-vote/).
- To be precise, we had different amounts of data for each of the methods—for example, our own polls were conducted over only the 2008–2009 season, whereas we had nearly thirty years of Vegas data, and TradeSports predictions ended in November 2008, when it was shut down—so we couldn’t compare all six methods over any given time interval. Nevertheless, for any given interval, we were always able to compare multiple methods. See Goel, Reeves, et al. (2010) for details.
Submitted March 30, 2019 at 07:11PM
via reddit https://ift.tt/2HQ6kbR