In computer science research, I’ve noticed a recurring pattern: creation of an area -> research into that area until most of the core theory is solved -> work on the same problem in slightly different contexts.
For example, in program synthesis, the area was created by the first formalizations of computation (thanks, Turing and Church!). Finding a program that meets a general specification is an undecidable problem, but the programs themselves are recursively enumerable. We can build some naive enumeration of programs, declare “they are recursively enumerable,” and be done with it.
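To make “naive enumeration” concrete, here is a toy sketch (my own illustration, not anyone’s real synthesizer): programs are just token strings, we generate every string in order of increasing length, and we keep the first one that happens to parse and satisfy a specification given as input/output examples.

```python
# Naive enumeration: treat programs as token strings and generate every
# string, shortest lengths first. Nothing here is clever -- it merely
# demonstrates that the set of programs is recursively enumerable.
from itertools import count, product

TOKENS = ["x", "1", "2", "+", "*"]

def naive_enumerate():
    """Yield every token string, dovetailing over increasing lengths."""
    for length in count(1):
        for toks in product(TOKENS, repeat=length):
            yield " ".join(toks)

def satisfies(src, examples):
    """Spec check: does the string parse and agree with all the examples?"""
    try:
        return all(eval(src, {"x": x}) == y for x, y in examples)
    except (SyntaxError, NameError, TypeError, ZeroDivisionError):
        return False

# Specification by input/output examples: f(x) = 2*x + 1.
examples = [(0, 1), (1, 3), (2, 5)]
for src in naive_enumerate():
    if satisfies(src, examples):
        found = src  # first program encountered that meets the spec
        break
print(found)
```

This terminates on satisfiable specs but is hopeless in practice: almost every string fails to parse, and the search space blows up exponentially with length.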
Next, the problem moves out of the purely abstract and into the practical. Processors have gotten faster, SAT solvers have gotten insanely good, and program synthesis looks more possible. Now, merely having an enumeration is not enough. All programs are recursively enumerable, but we want to know: is there a nice way to enumerate them? Can we enumerate so that simpler programs come first, under the assumption that simpler programs are more likely to be the ones needed in practice? This can be done for a while, and then certain abstractions can be made to hit a larger class of problems. SyGuS gets developed and makes it so that those problems get solved.
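A “simpler programs first” enumeration can be sketched as bottom-up search over a grammar (in the spirit of SyGuS-style bottom-up synthesizers, though the grammar, helpers, and names here are my own invention): programs of size n are built only from subprograms of smaller size, so every smaller candidate is tried before any larger one, and candidates that behave identically on the examples are pruned.

```python
# Bottom-up, smallest-first enumeration over a tiny expression grammar:
#   E ::= x | 1 | 2 | (E + E) | (E * E)
def bottom_up(examples, max_size=7):
    """Return a smallest-size expression string fitting the examples, or None."""
    xs = [x for x, _ in examples]
    target = tuple(y for _, y in examples)
    # terms[size] maps output-tuple -> expression; keeping one expression
    # per behavior is observational-equivalence pruning.
    terms = {1: {}}
    for expr, vals in [("x", tuple(xs)),
                       ("1", tuple(1 for _ in xs)),
                       ("2", tuple(2 for _ in xs))]:
        terms[1].setdefault(vals, expr)
    if target in terms[1]:
        return terms[1][target]
    for size in range(2, max_size + 1):
        terms[size] = {}
        for lsize in range(1, size - 1):
            rsize = size - 1 - lsize  # 1 accounts for the operator node
            for lv, le in terms[lsize].items():
                for rv, re in terms[rsize].items():
                    for op, f in (("+", lambda a, b: a + b),
                                  ("*", lambda a, b: a * b)):
                        vals = tuple(f(a, b) for a, b in zip(lv, rv))
                        expr = f"({le} {op} {re})"
                        if vals == target:
                            return expr
                        if not any(vals in terms[s] for s in terms):
                            terms[size][vals] = expr
    return None

prog = bottom_up([(0, 1), (1, 3), (2, 5)])  # spec: f(x) = 2x + 1
```

The pruning is what makes this tractable: two expressions that agree on every example are interchangeable as far as the spec can tell, so only one representative per behavior is ever extended.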
Then, inevitably, new problems arise. What about using synthesis on probabilistic models? This can fit into the class of problems SyGuS solves (although with the added complication of needing a slightly different metric for correctness: correct here meaning the best-fitting model that doesn’t overfit), but in practice it doesn’t fit without significant changes. Merely searching through the models doesn’t work, because this problem is slightly different from the others: the search space of likely programs is smaller, but each program takes much more time to evaluate. This raises a new problem: how do we handle this slow evaluation? Are there ways to share the computation of the evaluations between models? Is a slightly differently oriented search space perhaps better for sharing evaluations than the standard approach? By looking at finer-grained problems, different optimizations can be put in place, providing new research opportunities.
EDIT: Originally this was intended to look at how examining smaller things provides potentially very different problems, but after rereading, much of it seems to just be merely stealing Dave’s confluences speech. Oh well, someday I’ll have a unique thought…