Synthesis with automatically generated inputs

One of the biggest issues with program synthesis right now is that actually using it requires a lot of knowledge about synthesis itself.  Indeed, sometimes the synthesis will fail to give an answer due to underspecification, and only an advanced user with deep knowledge of the synthesis engine will understand where that underspecification lies.  For that advanced user, it is easy to identify the underspecification, provide the required spec, and watch synthesis succeed.  However, users with that knowledge generally don’t need synthesis.  While they can use synthesis for some small helper functions (I recently used synthesis to generate a Cartesian map), most of the time those users aren’t the ones who can benefit the most from synthesis.
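
For the curious, the Cartesian map mentioned above might look something like the following sketch (the signature and implementation here are my assumptions, not the synthesized output):

(* A plausible Cartesian map: apply f to every pair drawn from the two
   lists.  Assumed for illustration only. *)
let cartesian_map (f : 'a -> 'b -> 'c) (l1 : 'a list) (l2 : 'b list)
    : 'c list =
  List.concat_map (fun a -> List.map (fun b -> f a b) l2) l1

(* cartesian_map (+) [1; 10] [1; 2] = [2; 3; 11; 12] *)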

However, the synthesis engine itself is aware of when it’s underspecified.  For example, during a case analysis, the synthesis engine can see that one of the cases is not hit by any of the examples.  Guessing randomly in this circumstance is likely to fail, and will then require more user input to fix.  The user will need to go through the code, confirm whether it was synthesized correctly, and think up the right example to specify the incorrect codepath.  After that, they will need to rerun the synthesis algorithm from the start, creating and subsequently pruning large numbers of potential programs before finally getting back to the same place as before and being able to progress further.

Indeed, one could even imagine a scenario where, when attempting to decide between two potential synthesized programs, the engine runs some static analysis on them, uses that analysis to find an example input where the two programs deviate, and presents the user with a multiple-choice question about which output should be produced from that input.
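
As a toy illustration (the candidate programs and the distinguishing input here are invented, not produced by any particular engine), two candidates can agree on every provided example yet deviate elsewhere:

(* Two hypothetical candidates, both consistent with the single example
   f [1] = [1]. *)
let f1 (l : int list) : int list =
  begin match l with
  | [] -> []
  | a :: _ -> [a]
  end

let f2 (l : int list) : int list = l

(* The input [1; 2] distinguishes them: f1 returns [1] while f2 returns
   [1; 2], so the engine can ask the user which output is intended. *)
let () =
  assert (f1 [1] = f2 [1]);
  assert (f1 [1; 2] <> f2 [1; 2])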

Lastly, one could see the potential for even more user interaction.  The synthesizer could pose questions of a general nature, for example: “consider running f on a::b, would you like to return a) a, b) f b, c) something else, d) depends on a, e) depends on b, f) depends on a and b.”  In this way, the user doesn’t need to understand syntax or how to code, but can direct the program toward the correct answer through their knowledge of how the code should behave.  And this can guide the synthesizer through one of the hardest parts of synthesis: finding where case analysis needs to be applied.

An IDE tool for data structure changes

A few days ago I was writing some unit tests for my code when I realized the data structure my code was based on was incorrect.  I had, as one of the variants of my data type, RegExBase of char, where I wanted RegExBase of string instead.  This is because I wanted to be able to express the empty string as a valid regular expression (other strings can just be built up from RegExConcat of regex * regex).
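
Concretely, the change looked something like this (the datatype is reconstructed from the constructors mentioned above, so anything beyond those two variants is an assumption):

(* Before: the base case holds a single character. *)
type regex =
  | RegExBase of char
  | RegExConcat of regex * regex

(* After: the base case holds a string, so RegExBase "" can express the
   empty regular expression directly. *)
type regex =
  | RegExBase of string
  | RegExConcat of regex * regex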

This change was a fairly minor one, and the only real code changes I needed to make were in the pretty printer… and in the unit tests.  The changes I needed to make to the unit tests were fairly easy (search and replace ‘ for “).  However, it did highlight a critical flaw that many people who are unwilling to adopt unit testing bring up: adding unit tests increases development time and makes refactoring more difficult.  Increased development time is up for debate; I saw a recent study showing that TDD increased initial development time by ~20%, but led to ~20% fewer bugs (I could be wrong on those percentages).  Of course, unit tests aren’t merely for decreasing bugs right then; they also serve as documentation for future users about the intention of the code.  Refactoring, however, is certainly more difficult for the author who already knows the codebase.  Unit tests make refactoring easier for future developers, as they help them understand the code, but for the original author, who understands how everything fits together, the unit tests are merely a barrier in the way of refactoring.
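
The mechanical nature of that migration is exactly what makes it a candidate for automation.  A representative test (the parse function and the test’s shape are hypothetical) changes like so:

(* Before the change: a char literal in the expected value. *)
let test_base () = assert (parse "a" = RegExBase 'a')

(* After the change: the same test, with a string literal. *)
let test_base () = assert (parse "a" = RegExBase "a")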

However, the IDE has a wealth of information available to it during these transformations.  Not only does it see the before and after state of the data structure, it also sees how the actual code is changed to work with the new data structure, and it knows that all previously failing tests should continue failing and all previously succeeding tests should continue succeeding.  The IDE can use this information to make automated changes to the unit tests (or to suggest them).

But it can go a step further.  If the IDE can automatically update unit tests, perhaps it can go in the reverse direction: the user changes the unit tests, and the code is updated automatically.  This then begins to look like a normal input-output synthesis problem, but with the additional information of how the unit tests were changed.  And it can go even further still, where maybe just one or two unit tests are changed, and the tool tries to generalize to the code and the rest of the unit tests.  Or maybe only one function is changed, and it tries to generalize to the rest of the code and all of the unit tests.  This could make refactoring significantly easier for simple changes like turning a character parameter into a string parameter, especially in a large codebase, where large amounts of time can be spent chasing down where the build failures occur.

Changes in landscape causing changes in approaches

In computer science research, I’ve noticed a recurring pattern: creation of an area -> research into that area until most of the core theory is solved -> work on the problem in slightly different contexts.

For example, in program synthesis, the creation of the area came with the first formalizations of computation (thanks Turing and Church!).  Finding a program that meets a general specification is an undecidable problem, but the programs themselves are recursively enumerable.  We can build some naive enumeration of the programs and be done with it, saying “they are now recursively enumerable.”

Next, the problem stops being purely abstract and starts being practical.  Processors have become faster, SAT solvers have become remarkably powerful, and program synthesis looks more possible.  Now, merely having an enumeration is not enough.  While all the programs are recursively enumerable, we want to know: is there a nice way to enumerate?  Can we enumerate so that simpler programs come first, under the assumption that simpler programs are more likely to be the programs needed in practice?  This can be done for a while, and then certain abstractions can be made to hit a larger class of problems.  SyGuS gets developed, and those problems get solved.
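
A minimal sketch of what “simpler programs first” can mean, over a toy expression language of my own invention (real engines enumerate far more cleverly):

(* Enumerate all expressions of a toy language ordered by size, so that
   smaller (simpler) candidates appear first. *)
type expr =
  | Zero
  | Succ of expr
  | Add of expr * expr

(* All expressions of exactly size n. *)
let rec exprs_of_size (n : int) : expr list =
  if n <= 0 then []
  else if n = 1 then [Zero]
  else
    let succs = List.map (fun e -> Succ e) (exprs_of_size (n - 1)) in
    let adds =
      List.concat_map
        (fun k ->
           List.concat_map
             (fun l -> List.map (fun r -> Add (l, r))
                         (exprs_of_size (n - 1 - k)))
             (exprs_of_size k))
        (List.init (max 0 (n - 2)) (fun i -> i + 1))
    in
    succs @ adds

(* Size-ordered enumeration: simpler candidates are tried first. *)
let candidates = List.concat_map exprs_of_size [1; 2; 3; 4]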

Then, inevitably, new problems arise.  What about using synthesis on probabilistic models?  This can fit into the problems SyGuS solves (although with the added complication of needing a slightly different metric for correctness, where “correct” here means the best-fitting non-overfit model), but in practice it doesn’t fit without significant changes.  Merely searching through the models doesn’t work, as this problem is slightly different from the others: the search space of likely programs is smaller, but the programs take much more time to evaluate.  This creates a new problem: how do we handle this slow evaluation?  Are there ways to share computation of the evaluations between models?  Is a slightly differently oriented search space perhaps better for sharing evaluations than the standard approach?  By looking at finer grained problems, different optimizations can be put in place, providing new research opportunities.

EDIT: Originally this was intended to look at how examining smaller problems surfaces potentially very different research questions, but after rereading, much of it seems to merely restate Dave’s confluences speech.  Oh well, someday I’ll have a unique thought…

Logic, Types, and Probability

Probability is defined in the most rigorous of settings as a sample space, a sigma algebra, and a function from that sigma algebra into [0,1].  However, in probability classes, it is often introduced from a logical standpoint.  Event e has probability p(e), event (not e) has probability 1-p(e), and event (e1 and e2) has probability p(e1)*p(e2) (assuming independence).  Using this we can continue defining probabilities: the probability of (e1 or e2) is the probability of not (not e1 and not e2), so 1-(1-p(e1))*(1-p(e2)), and the probability of (e1 -> e2) is the probability of (not e1 or e2), which is the probability of not (e1 and not e2), which is 1-p(e1)*(1-p(e2)).
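
These rules translate directly into code; here is a small sketch (assuming independence throughout):

(* Logical probability rules, assuming independence. *)
let p_not p = 1.0 -. p
let p_and p1 p2 = p1 *. p2
let p_or p1 p2 = 1.0 -. (1.0 -. p1) *. (1.0 -. p2)
(* e1 -> e2 is equivalent to not (e1 and not e2). *)
let p_implies p1 p2 = 1.0 -. p1 *. (1.0 -. p2)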

We can try to extend this, via the Curry-Howard isomorphism, into the simply typed lambda calculus, assigning probabilities to certain types.  The probability of type A -> B becomes 1-p(A)*(1-p(B)).
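
As a sketch, extending the implication rule recursively over simple types might look like this (the representation of types and the treatment of base types are my assumptions):

(* Simple types: base types and arrows. *)
type ty =
  | Base of string
  | Arrow of ty * ty

(* p(A -> B) = 1 - p(A) * (1 - p(B)), with base-type probabilities
   supplied by p_base. *)
let rec p_type (p_base : string -> float) (t : ty) : float =
  begin match t with
  | Base b -> p_base b
  | Arrow (a, b) -> 1.0 -. p_type p_base a *. (1.0 -. p_type p_base b)
  end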

Unfortunately, this natural progression from probabilities of events to probabilities of types doesn’t seem to work for what synthesis would use probabilities for.  In synthesis, probabilities would more naturally come from creating a sigma algebra over the space of refinement trees, and traversing those trees in order of probability, most likely first.  This, I feel, is a productive area of thought.  For example, consider derivations in the simply typed lambda calculus (with unit) of Unit.  There’s a simple one, unit : Unit.  But there’s also a more complex one, (\x . x) unit, and also (\x . unit) unit.  It would be best for each sub-deduction to have the same sigma algebra.  If somehow there is a deduction which has the same context and goal type as the original, it should have the same probabilities as the original.
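
Those three derivations, written out in OCaml, where unit is () : unit:

(* Three distinct derivations of the same goal type. *)
let d1 : unit = ()
let d2 : unit = (fun x -> x) ()
let d3 : unit = (fun _ -> ()) ()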

Falling into the pit of success of following conventions

Falling into the pit of success is an often-touted phrase in programming, and amongst people in programming languages.  The idea behind the phrase is that in a well designed language, it is easier to write a correct program than an incorrect one.  Because it’s easier to write a correct program, the programmer will write the correct program almost by accident, falling into it seemingly without trying.  This is usually done through strong typing, where incorrect programs aren’t considered programs at all, and through languages where the meaning of a program is easy to tell from looking at it.  Some software engineering libraries do this too, where the code reads like a sentence in English.

// A hypothetical fluent API: the call chain reads like an English sentence.
x.OnRequest(GET)
 .Call(handler);

When another user reads this code, they can understand it as easily as they can understand normal English sentences.  Similarly, writing that code is as easy as writing normal English sentences.

Conventions are certain restrictions imposed on developers.  These conventions can be used to remove bad and dangerous patterns; one such convention is disallowing global variables.  Other conventions make the program easier to understand for readers, for example, prepending m_ to private variables in C#.  Many of the reasons for conventions are the same reasons to get developers to fall into the pit of success.  Unfortunately, unlike a well designed programming language or API, which disallows things that go against conventions through the type system and syntax, conventions can be broken.  During high pressure situations, like a looming deadline, justifications can be made to break those conventions, and discipline must be exercised to keep them intact.  In my experience, these justifications are never truly worth it and don’t seem to pay off, but we humans are weak and will take the path of least resistance.  Perhaps languages should make themselves easily modifiable, allowing companies to encode their own conventions into the system.  Or perhaps, in major releases of a language, existing and widely accepted conventions should be incorporated into the language itself.

How do humans know which library functions to use?

A programmer hears that she needs to create a function that adds up all the numbers in a list of numbers.  The programmer identifies that this is a good situation for using the built-in library function List.fold_left, and so uses it.

But where does she get the knowledge that a fold is the correct function to call?  She hears the problem and recognizes that it has the pattern of accumulating data into an accumulator, knows that fold often fits that pattern, and fills in the details so that the result typechecks and performs the correct action: 0 for the empty list and (+) for the accumulation.
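
Concretely, the instantiation she lands on might look like:

(* Summing a list by instantiating fold_left: 0 for the empty list,
   (+) to accumulate each element. *)
let sum (l : int list) : int = List.fold_left (+) 0 l

let () = assert (sum [1; 2; 3] = 6)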

How could we inform the computer that it needs to perform that accumulation action, and how could it know that fold is appropriate?

We first need to formalize what the notion of accumulation is.  Certainly the definition of fold formalizes it.  The implementation of fold is shown below.

let rec fold_left (f : 'a -> 'b -> 'a) (acc : 'a) (l : 'b list) : 'a =
  begin match l with
  | [] -> acc
  | h :: t -> fold_left f (f acc h) t
  end

This feels too specific for a specification of something that accumulates.  I can’t, however, find anything more general.  This is a difficulty in trying to encode semantics: the semantics seem to become as complicated as the program.  Perhaps a complete specification is not the right goal; instead, giving invariants that imply, or nearly imply, the complete semantics of the program may be sufficient.  For example, instead of writing out exactly what we want the accumulator to handle, we want to know that sum (l : int list) has the properties that sum [] = 0 and sum (h::t) = h + sum t.  This is certainly more specific than merely int list -> int, but not yet a full implementation.  However, it seems to give enough information to provide a full implementation, and there is a natural way to extend the specification into an implementation.  Perhaps a stronger type system would enable more instances where the type informs the implementation in immediate ways.
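
That natural extension: read the two invariants as defining clauses, one per case of the list, and the implementation falls out (a sketch of the idea, not a synthesis algorithm):

(* The invariants sum [] = 0 and sum (h::t) = h + sum t, read directly
   as a recursive definition. *)
let rec sum (l : int list) : int =
  begin match l with
  | [] -> 0
  | h :: t -> h + sum t
  end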