I would conjecture that the reason a proof does not seem to exist is that, in a purely theoretical framework, such a model could exist for corner cases.
Under the assumption that the existence of a meta-model would modify the usage of the model and thereby affect the market, something like Russell's paradox would seem to occur.
The exception is the case where a stable market/meta-market relationship could be achieved. That is, we could construct some theoretical markets (not models, but purely theoretical complete descriptions of the market) in which the existence of a meta-model would not affect the market, or which in a finite number of steps would converge to a final market/meta-model pair in which the existence of the final meta-model causes no further changes to the market.
Without a stable covariance structure, I imagine the error terms driven by distributional assumptions and the time-varying elements of the underlying would kill any definitive statement one could make.
There are plenty of market models -- capital asset pricing model (CAPM), conditional CAPM (CCAPM), intertemporal CAPM (ICAPM), and arbitrage pricing theory (APT). But any model, finance or otherwise, requires assumptions. Under these models the market may pay you to play your strategy, but in return you must accept risk. So with one of these models you could determine the half-life of your trading model, but it would require you to make some forecast about the relevant factors in your model. And even then you would have to accept some risk-return trade-off.
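To make the trade-off concrete, here is a minimal sketch of the standard CAPM relation E[R_i] = R_f + β_i (E[R_m] − R_f); the numeric inputs below are illustrative assumptions, not estimates from any data.

```python
# Sketch of the CAPM expected-return relation. The relation itself is
# standard; all numbers used below are made up for illustration.

def capm_expected_return(risk_free: float, beta: float, market_return: float) -> float:
    """Expected return of an asset under CAPM:
    E[R_i] = R_f + beta_i * (E[R_m] - R_f)."""
    return risk_free + beta * (market_return - risk_free)

# Hypothetical inputs: 2% risk-free rate, beta of 1.3, 8% expected market return.
expected = capm_expected_return(risk_free=0.02, beta=1.3, market_return=0.08)
print(f"{expected:.3%}")  # a high-beta asset demands a higher expected return
```

The point of the model is visible in the formula: the market "pays" you the risk premium β(E[R_m] − R_f) only because you accept exposure to market risk.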
The required assumptions for these models -- typically things like common information set and expectations for all players, no arbitrage, and complete markets -- aren't perfect, but I think they are the only way to build a tractable model for the whole market. It is difficult to justify a model that has predictable risk-free profit opportunities. There are just too many smart, hardworking people in financial markets.
I am not saying that I think markets are completely efficient, just that it would be difficult to build such a model.
I see the meta-model as the process you employ to develop a model. The important problem is that the real world changes continuously: in order to maintain your meta-model you would need to constantly refine your modeling process to account for those changes.
A good meta-model is a theoretical construct.
Evolutionary economics provides a framework for reasoning about meta-models of markets, the primary metaphor being ecological. Predicting a model's half-life would thus involve discussing market ecologies, the niche specialization of a model, the comparative fitness of models, and the like.
Imagine you had a formula that predicted what the S&P 500 would do every microsecond for the next 5 years. A parsimonious representation of this formula would stitch together simpler formulae that operate on disjoint, exhaustive sub-intervals of the next 5 years. So in a sense even a "perfect formula" would be made up of models that expire.
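The "formula of expiring models" above can be sketched as a piecewise function; the regime boundaries and sub-formulae here are invented purely for illustration.

```python
# Toy illustration: a "perfect formula" stitched together from simpler
# sub-formulae, each valid only on its own disjoint sub-interval of time.
# Regimes and coefficients are made up for illustration.

def perfect_formula(t: float) -> float:
    """Piecewise model of models: each piece is exact on its interval
    and expires at its boundary."""
    if t < 1.0:        # regime 1: a simple linear trend
        return 100.0 + 5.0 * t
    elif t < 3.0:      # regime 2: a different slope entirely
        return 105.0 - 2.0 * (t - 1.0)
    else:              # regime 3: flat
        return 101.0

# Each sub-model is useless outside its own regime -- the parsimonious
# whole is just the schedule of when each one expires.
print(perfect_formula(0.5), perfect_formula(2.0), perfect_formula(4.0))
```

Even this "perfect" function is, internally, a sequence of models with finite lifetimes, which is exactly the sense in which a perfect formula would be made up of models that expire.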
I believe there are ways of meta-modelling just as there are ways of modelling — but there is no formula for the stock market because some determinants of the price are exogenous. Meta-models are imperfect for the same reason models are imperfect.
The map is not the territory, any model is an abstraction and will never be complete, the only complete model of the market is the market itself, and so on. I agree that this leads directly into Gödel, Turing, the halting problem, and other basic computability concepts.
Try this thought experiment: Imagine the market as a Turing machine named M, reading and writing to a tape of infinite length. All past and future news events are on the tape. The market (our machine named M) reads a news event and, based on its internal state and ruleset, writes a market data message to the tape. Because of the halting problem and related undecidability results, we can't use a different machine to predict ahead of time what the next market data message will be. The only way to discover the next market data message is to execute M's next machine cycle.
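A tiny toy version of this machine makes the point tangible; the transition rule and "news tape" below are invented purely for illustration, not a claim about real market dynamics.

```python
# Toy machine M from the thought experiment: it reads one news event per
# cycle, updates a hidden internal state, and emits a "market data message".
# The transition rule is deliberately opaque and made up for illustration.

def hash_event(event: str) -> int:
    """Stand-in for M reading a news event off the tape."""
    return sum(ord(c) for c in event)

def run_market_machine(news_tape):
    """Generator for M: the only way to learn message n is to execute
    cycles 1..n in order -- there is no closed-form shortcut here."""
    state = 0  # M's hidden internal state
    for event in news_tape:
        state = (state * 31 + hash_event(event)) % 97  # opaque ruleset
        yield f"tick:{state}"  # the market data message written to the tape

# Stepping the machine is the only way to reveal its output stream.
for message in run_market_machine(["rate hike", "earnings beat", "merger"]):
    print(message)
```

Of course, this toy machine *is* predictable to anyone who holds a copy of its ruleset and state; the thought experiment's point is precisely that for the real market, obtaining that copy would mean recreating the market itself.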
Trying to write an exact model of the market is like trying to duplicate M's entire tape, including future news events, as well as M's internal ruleset. And before we can begin executing our own model, we also need to grab a copy of M's internal state, which, in the case of a real-world market, includes the internal state of all counterparties. Any Turing machine that can do all of that is, in fact, identical to M. We would have to recreate the entire market, including future news, in order to model M.
If we try to write a macro-level model G that summarizes M's output, aggregating by timeframe, index, or some such, then we still run into the same problem: Macromarket G is just another machine, still with an infinite tape. We can assume that G's behavior is determined by a simpler ruleset, with smaller internal state than M. That's the whole point of a model. But again, in a real-world market, that internal state is stored amongst the counterparties, and the tape still includes future news events. And the next (aggregated) market data message is still not decidable without executing G's next machine cycle.
We can model this several other ways. For instance, instead of one big machine M or G, we can use a bunch of smaller machines, each representing a counterparty, all sharing the same tape. That just makes the complexity worse.
Of course, Turing only claimed that a Turing machine could compute any problem that was decidable by machine. A real-world market includes players who are not machines. But even if we were to assume that human behavior is mechanistic, we'd still face the intractability of obtaining a copy of their internal state in order to model M or G.
In reality we already know that no model is completely accurate, and no model's accuracy is constant over time. The above thought experiments, I think, might be able to illustrate some of the reasons why.
If we take causal determinism as a given, the behaviour of markets is in the end just physics. In principle you could simulate everything, including uncertainties due to the uncertainty principle, and obtain true expected values. This means that there is a perfect model for the markets. Any other model, if it agrees with this perfect model on the market values of interest for all given inputs, can be validated against it.
Gödel's incompleteness theorem states that, given a consistent set of axioms capable of expressing arithmetic, there will always be statements about the natural numbers which can neither be proven nor disproven within the system. This is independent of available computing power.
To compare: given sufficient computing power it is possible to check a given market model, whereas there are axiomatic systems in which it is not possible to check a given statement for correctness. That is why I would not intuitively expect an analogue of Gödel's theorem for the markets.