Animals process information about many stimulus features simultaneously, swiftly (within a few hundred milliseconds), and robustly (even when individual neurons do not themselves respond reliably). When the brain carries, encodes, and, certainly, when it decodes information, it must do so through some coarse-grained projection mechanism. How can such a projection retain information about multiple features of network dynamics, both swiftly and robustly? Here we propose a method of characterizing the dynamic information of neuronal networks through a coarse-grained projection onto event trees, and onto the event chains that constitute these trees, using a statistical collection of spatiotemporal sequences of relevant physiological observables (such as the spiking sequences of multiple neurons). We demonstrate, through idealized point-neuron simulations of small networks, that this event tree analysis can reveal, with high reliability, information about multiple stimulus features within short, realistic observation times. Then, with a large-scale realistic computational model of V1, we show that coarse-grained event trees contain sufficient information, again over short observation times, for fine discrimination of orientation, with results consistent with recent experimental observations.
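To make the event-chain idea concrete, the sketch below illustrates one plausible way to build a statistical collection of event chains from multi-neuron spike data. The specific choices here (an "event" is a neuron spiking within a coarse time bin, a chain is an ordered tuple of active neurons across consecutive bins, and the bin width and chain length are free parameters) are illustrative assumptions for this example, not the paper's exact construction.

```python
from collections import Counter
from itertools import product

def extract_event_chains(spike_times, t_start, t_stop,
                         bin_width=5.0, chain_length=3):
    """Tally event chains from multi-neuron spike data.

    spike_times : dict mapping neuron index -> list of spike times (ms).
    An "event" here means "neuron i spiked in coarse bin b"; a chain of
    length L is an ordered tuple of neurons, one per each of L consecutive
    bins.  These definitions are placeholders for illustration only.
    """
    n_bins = int((t_stop - t_start) // bin_width)

    # active[b] = set of neurons that spiked at least once in bin b
    active = [set() for _ in range(n_bins)]
    for neuron, times in spike_times.items():
        for t in times:
            b = int((t - t_start) // bin_width)
            if 0 <= b < n_bins:
                active[b].add(neuron)

    # Slide a window of chain_length consecutive bins and count every
    # ordered choice of one active neuron per bin.  The resulting Counter
    # is the "statistical collection" of chains for this observation.
    chains = Counter()
    for b in range(n_bins - chain_length + 1):
        window = [sorted(active[b + k]) for k in range(chain_length)]
        if any(len(w) == 0 for w in window):
            continue  # skip windows containing an empty bin
        for chain in product(*window):
            chains[chain] += 1
    return chains
```

In this toy setting, discrimination of stimulus features would amount to comparing the chain-count histograms returned for different stimulus conditions (for example, two orientations) over repeated short observation windows; how the chains are grouped into trees and how the comparison is scored are details left to the body of the paper.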