Probability is defined in the most rigorous of settings as a sample space, a sigma algebra, and a function from that sigma algebra into [0,1]. However, in probability classes, it is often introduced from a logical standpoint: event e has probability p(e), event not-e has probability 1 - p(e), and event e1-and-e2 has probability p(e1) * p(e2) (assuming independence). Using this we can continue defining probabilities. The probability of e1 or e2 is the probability of not (not e1 and not e2), so 1 - (1 - p(e1)) * (1 - p(e2)). The probability of e1 -> e2 is the probability of (not e1) or e2, which is the probability of not (not not e1 and not e2), which works out to 1 - p(e1) * (1 - p(e2)).
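The derived probabilities above can be sketched directly; this is a minimal illustration assuming independent events, with function names chosen here for clarity:

```python
def p_not(p):
    # p(not e) = 1 - p(e)
    return 1 - p

def p_and(p1, p2):
    # p(e1 and e2) = p(e1) * p(e2), assuming independence
    return p1 * p2

def p_or(p1, p2):
    # p(e1 or e2) = p(not (not e1 and not e2)) = 1 - (1 - p1) * (1 - p2)
    return 1 - p_and(p_not(p1), p_not(p2))

def p_implies(p1, p2):
    # p(e1 -> e2) = p((not e1) or e2) = 1 - p1 * (1 - p2)
    return p_or(p_not(p1), p2)
```

Note that p_implies(1.0, 0.0) gives 0 (a true antecedent and false consequent falsify the implication), and p_implies(0.0, q) gives 1 for any q, matching the classical truth table in the limit.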
We can try to extend this, via the Curry-Howard isomorphism, into the simply typed lambda calculus, assigning probabilities to certain types. The probability of type A -> B becomes 1 - p(A) * (1 - p(B)).
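As a sketch of this extension, assuming a hypothetical representation where base types are strings with stipulated probabilities and an arrow type A -> B is a tuple ("->", A, B):

```python
# Assumed example probabilities for two base types.
BASE_PROBS = {"A": 0.3, "B": 0.6}

def p_type(t):
    # Base type: look up its stipulated probability.
    if isinstance(t, str):
        return BASE_PROBS[t]
    # Arrow type A -> B: mirror implication, p = 1 - p(A) * (1 - p(B)).
    _, a, b = t
    return 1 - p_type(a) * (1 - p_type(b))
```

For example, p_type(("->", "A", "B")) evaluates to 1 - 0.3 * 0.4 = 0.88, and the recursion handles nested arrows such as ("->", ("->", "A", "B"), "A").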
Unfortunately, this natural progression from probabilities of events to probabilities of types doesn’t seem to fit what synthesis would use probabilities for. In synthesis, probabilities would more naturally define a sigma algebra over the space of refinement trees, with those trees traversed in order of increasing probability. This, I feel, is a productive area of thought. For example, consider derivations of Unit in the simply typed lambda calculus (with unit). There’s a simple one, unit : Unit. But there’s also a more complex one, (\x . x) unit, and also (\x . unit) unit. It would be best for each sub-deduction to have the same sigma algebra. If somehow there is a sub-deduction with the same context and goal type as the original, it should be assigned the same probabilities as the original.
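To make the multiplicity of derivations concrete, here is a toy enumeration, assuming terms are represented as plain strings and only the two rules mentioned above (the unit axiom and application of a Unit-typed abstraction to a Unit-typed argument) are used:

```python
def terms_of_unit(depth):
    """Yield closed terms of type Unit, up to a recursion depth bound."""
    # Axiom: unit : Unit.
    yield "unit"
    if depth == 0:
        return
    # Application: (\x. body) arg : Unit, where arg : Unit and body : Unit
    # (body may use the bound variable x : Unit, or be a smaller Unit term).
    for arg in terms_of_unit(depth - 1):
        yield f"(\\x . x) {arg}"
        for body in terms_of_unit(depth - 1):
            yield f"(\\x . {body}) {arg}"
```

At depth 1 this produces exactly the three derivations named above: unit, (\x . x) unit, and (\x . unit) unit; each increase in depth adds more complex derivations of the same goal type, which is the tree a synthesis procedure would be traversing.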