The aim of the present study was to determine differences in the cortical processing of consonant-vowel syllables, acoustically matched non-speech sounds, and novel human and nonhuman sounds. Event-related potentials (ERPs) were recorded to vowel, vowel-duration, consonant, syllable-intensity, and frequency changes, as well as to the corresponding changes in their non-speech counterparts, using the multi-feature mismatch negativity (MMN) paradigm. Enhanced responses to linguistically relevant deviants were expected. Indeed, the vowel and frequency deviants elicited significantly larger MMNs in the speech than in the non-speech condition. A minimum-norm source localization algorithm was applied to determine hemispheric asymmetry in the responses. Linguistically relevant deviants (vowel, duration, and, to a lesser degree, frequency) elicited stronger activation in the left than in the right hemisphere in the speech condition. Novel sounds elicited novelty P3 waves, whose amplitude for nonhuman sounds was larger in the speech than in the non-speech condition. The present MMN results imply enhanced processing of linguistically relevant information at the pre-attentive stage and thereby support the domain-specific model of speech perception.