We suggested that, when our high-level predictions are particularly certain (corresponding to the psycholinguistic construct of pre-updating), and the bottom-up input turns out to be incompatible with this high-certainty inference, this will lead to additional neural processing, which might reflect adaptation.

In the psycholinguistics literature, the constructs we considered in this review have sometimes been discussed as being qualitatively different from one another. For example, the predictability of information in a context has sometimes been viewed as distinct from pre-activation, and predictive pre-activation has sometimes been viewed as distinct from pre-updating. Here, however, we have argued that these constructs may be linked by appealing to a hierarchical, dynamic and actively generative framework of language comprehension, in which the comprehender’s goal is to infer, with as much certainty as possible, the message-level interpretation or situation model that the producer intends to communicate, at a rate that allows her to keep up with the speed at which the linguistic signal unfolds.

Within this framework, this goal is achieved through incremental cycles of belief updating (Bayesian inference) at multiple levels of representation: at the highest message-level representation, as well as at all the levels below that allow the comprehender to achieve her specific goal. We have also suggested that the comprehender actively propagates beliefs/predictions down to successively lower levels of representation (corresponding to predictive pre-activation) in order to minimize expected Bayesian surprise for each new bottom-up input. In this way, when new bottom-up input is encountered, any Bayesian surprise at these lower-level representations will be less than if the comprehender had not predictively pre-activated at all. Finally, we have suggested that, by weighting the degree of updating by her estimates of the relative reliabilities of her priors and likelihoods at any given level of representation, a comprehender who has bounded resources can achieve this goal more efficiently, quickly and flexibly. Thus, within this type of actively generative framework, prediction is not simply an ‘add-on’ that aids the recognition of bottom-up input; it plays a pivotal role in driving higher-level inference: the goal of comprehension itself.
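To make these constructs concrete, the sketch below states one standard way of casting them formally. The notation (a representation $s_k$ at level $k$, new bottom-up input $I$, and precision estimates $\pi$) is introduced here purely for illustration; the framework itself is agnostic about the exact instantiation. At each level, beliefs are updated by Bayes’ rule,

\[ P(s_k \mid I) \;\propto\; P(I \mid s_k)\, P(s_k), \]

and Bayesian surprise at that level is the size of the resulting shift in beliefs, commonly quantified as the Kullback-Leibler divergence between posterior and prior,

\[ \mathrm{Surprise}_k \;=\; D_{\mathrm{KL}}\big( P(s_k \mid I) \,\|\, P(s_k) \big). \]

On this reading, predictive pre-activation amounts to setting the lower-level prior $P(s_k)$ from current higher-level beliefs before $I$ arrives, the idea being that, to the extent those higher-level beliefs are accurate, the surprise expected over probable inputs,

\[ \mathbb{E}\big[\mathrm{Surprise}_k\big] \;=\; \sum_{I} P(I)\, D_{\mathrm{KL}}\big( P(s_k \mid I) \,\|\, P(s_k) \big), \]

is smaller than it would be under an unpredictive prior. Reliability weighting, finally, takes a familiar form in the Gaussian case, where the posterior estimate is a precision-weighted average of the prior mean $\mu_p$ and the input-driven estimate $\mu_\ell$,

\[ \hat{s}_k \;=\; \frac{\pi_p\,\mu_p + \pi_\ell\,\mu_\ell}{\pi_p + \pi_\ell}, \]

so that an unreliable (low-precision) input shifts beliefs less than a reliable one. This is only one of many possible instantiations, as noted below.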
Of course, there is much work to be done in formalizing and implementing this framework. By adopting a probabilistic framework and discussing the role of prediction in language comprehension at Marr’s computational level of analysis, we are not claiming that the brain literally computes probabilities, but rather that it may be possible to describe what it is computing in probabilistic terms. In addition, as has sometimes been pointed out, we are consciously aware of only one experience (or, in the case of language, one interpretation) at any one time (see Jackendoff, 1987, pages 115-119, for discussion). It will therefore be important to understand how such probabilistic inference drives our (conscious) comprehension of language (for one theory in the perceptual domain, see Hohwy, Roepstorff, & Friston, 2008, and discussion by Clark, 2013, pages 184-185). It is also important to note that constructs such as Bayesian surprise can be instantiated in many different ways at the algorithmic and neural levels. For example, key compon.