### Updating, supposing, and maxent

By choosing to use the distribution with the maximum entropy allowed by our information, the argument goes, we are choosing the most uninformative distribution possible. The information entropy can therefore be seen as a numerical measure of how uninformative a particular probability distribution is, ranging from zero (completely informative) to log m (completely uninformative).

There are claims in the literature that the principle of maximum entropy, from now on pme, conflicts with this generalization of Jeffrey conditioning. I will show under which conditions this conflict obtains. The conclusion in Section 6 summarizes my claims and briefly refers to epistemological consequences.

Suppose we had two partitions of an event space and knew every conditional probability of an event in the first partition given an event in the second partition. Would we be able to calculate the marginal probabilities for the two partitions? It is important to note that these probabilities do not legislate independence, even though they allow it [4].

The invariant measure function is actually the prior density function encoding 'lack of relevant information'. It cannot be determined by the principle of maximum entropy, and must be determined by some other logical method, such as the principle of transformation groups or marginalization theory. This avoids the introduction of unjustified information [4].

#### The Wallis derivation

The following argument is the result of a suggestion made by Graham Wallis to E. T. Jaynes. One might imagine a person who throws N balls into m buckets while blindfolded. In order to be as fair as possible, each throw is to be independent of any other, and every bucket is to be the same size.
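The ball-throwing picture can be sketched numerically. The following snippet is my own illustration, not from the source: the function names and the specific choices of N and m are arbitrary. It checks that entropy runs from zero for a point mass to log m for the uniform distribution, and that fair, independent throws produce empirical frequencies close to the maximum-entropy (uniform) assignment.

```python
import math
import random
from collections import Counter

def entropy(p):
    """Shannon entropy in nats; zero-probability terms contribute nothing."""
    return -sum(q * math.log(q) for q in p if q > 0)

def wallis_trial(n_balls, m_buckets, rng):
    """Blindfolded throws: each ball lands in a uniformly chosen bucket.
    Returns the empirical frequency of each bucket."""
    counts = Counter(rng.randrange(m_buckets) for _ in range(n_balls))
    return [counts.get(b, 0) / n_balls for b in range(m_buckets)]

m = 4
# Entropy ranges from 0 (point mass: completely informative)
# to log m (uniform: completely uninformative).
assert math.isclose(entropy([1.0, 0.0, 0.0, 0.0]), 0.0)
assert math.isclose(entropy([1.0 / m] * m), math.log(m))

# With many balls, the frequencies of a fair, independent assignment
# concentrate near the maximum-entropy (uniform) distribution.
rng = random.Random(0)
freqs = wallis_trial(100_000, m, rng)
print(freqs, entropy(freqs))
```

This mirrors the Wallis argument's core observation: the occupancy pattern a fair, blindfolded process is most likely to produce is the one whose entropy is maximal.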
Among formal epistemologists, there is a widespread view that, while pme is a generalization of Jeffrey conditioning, it is an inappropriate updating method in certain cases and does not enjoy the generality of Jeffrey conditioning. Wagner solves one such case using a natural generalization of Jeffrey conditioning, which I will call Wagner conditioning.

#### Justifications for the principle of maximum entropy

Proponents of the principle of maximum entropy justify its use in assigning probabilities in several ways, including the following two arguments. These arguments take the use of Bayesian probability as given, and are thus subject to the same postulates. Suppose an individual wishes to make a probability assignment among m mutually exclusive propositions. Once the joint probabilities and the marginal probabilities are available, it is trivial to calculate the conditional probabilities.
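To make the last point concrete, here is a minimal sketch assuming a made-up 2x2 joint distribution over two partitions (the numbers and names are illustrative, not from the source): marginals fall out of the joint table by summation, conditionals by a single division, and Jeffrey conditioning then propagates a revised marginal over one partition to the other.

```python
from itertools import product

# Hypothetical joint distribution over two partitions,
# A in {a1, a2} and B in {b1, b2} (numbers are illustrative).
joint = {("a1", "b1"): 0.30, ("a1", "b2"): 0.20,
         ("a2", "b1"): 0.10, ("a2", "b2"): 0.40}

A_vals, B_vals = ("a1", "a2"), ("b1", "b2")

# Marginals: sum the joint over the other partition.
pA = {a: sum(joint[(a, b)] for b in B_vals) for a in A_vals}
pB = {b: sum(joint[(a, b)] for a in A_vals) for b in B_vals}

# Conditionals: one division per cell, P(a | b) = P(a, b) / P(b).
pA_given_B = {(a, b): joint[(a, b)] / pB[b]
              for a, b in product(A_vals, B_vals)}

def jeffrey(p_cond, new_B):
    """Jeffrey conditioning: P'(a) = sum_b P(a | b) * q(b),
    where q is the new (uncertain-evidence) distribution over B."""
    return {a: sum(p_cond[(a, b)] * new_B[b] for b in B_vals)
            for a in A_vals}

# Shift belief over B to q(b1) = 0.7, q(b2) = 0.3 and propagate to A.
posterior = jeffrey(pA_given_B, {"b1": 0.7, "b2": 0.3})
print(pA, posterior)
```

Note that when q equals the old marginal over B, Jeffrey conditioning returns the old marginal over A, which is why it is a conservative generalization of ordinary conditioning.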