Many biological decision-making processes can be viewed as performing a classification task over a set of inputs, using various chemical and physical processes as "biological hardware." In this context, it is important to understand the inherent limitations on the computational expressivity of classification functions instantiated in biophysical media. Here, we model biochemical networks as Markov jump processes and train them to perform classification tasks, allowing us to investigate the input-output functions they can express. We reveal several unanticipated limitations on these functions, which we further show can be lifted by biochemical mechanisms such as promiscuous binding. We analyze the flexibility and sharpness of decision boundaries as well as the classification capacity of these networks. Additionally, we identify distinctive signatures of networks trained for classification, including the emergence of correlated subsets of spanning trees and a creased "energy landscape" with multiple basins. Our findings have implications for understanding and designing physical computing systems in both biological and synthetic chemical settings.
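To make the modeling step concrete, the following is a minimal sketch (not the authors' implementation) of the kind of setup described above: a small Markov jump process whose input-dependent transition rates are tuned by gradient descent so that the steady-state occupancy of a designated readout state classifies a scalar input. The network size, the choice of which transitions couple to the input, the rate parameterization, and the loss function are all illustrative assumptions.

```python
# Sketch: train a continuous-time Markov jump process to classify a scalar input,
# under the illustrative assumptions stated above.
import jax
import jax.numpy as jnp

N = 4  # hypothetical number of network states

# Assumed: transitions 0->1 and 2->3 are "binding" steps whose rates scale with
# the input concentration x; all other rates are input-independent.
input_mask = jnp.zeros((N, N), dtype=bool).at[0, 1].set(True).at[2, 3].set(True)

def steady_state(log_rates, x):
    """Steady-state distribution of the jump process for input x."""
    rates = jnp.exp(log_rates) * jnp.where(input_mask, x, 1.0)  # nonnegative rates
    rates = rates * (1.0 - jnp.eye(N))                          # no self-transitions
    Q = rates - jnp.diag(rates.sum(axis=1))                     # generator matrix
    # Solve pi @ Q = 0 with sum(pi) = 1 by replacing one redundant equation.
    A = jnp.vstack([Q.T[:-1], jnp.ones(N)])
    b = jnp.zeros(N).at[-1].set(1.0)
    return jnp.linalg.solve(A, b)

def predict(log_rates, x):
    # Occupancy of state 0 serves as the (assumed) classifier output.
    return steady_state(log_rates, x)[0]

def loss(log_rates, xs, ys):
    preds = jax.vmap(lambda x: predict(log_rates, x))(xs)
    return jnp.mean((preds - ys) ** 2)

key = jax.random.PRNGKey(0)
log_rates = 0.1 * jax.random.normal(key, (N, N))
xs = jnp.array([0.1, 0.9])   # toy inputs (e.g., ligand concentrations)
ys = jnp.array([0.0, 1.0])   # toy class labels

grad_fn = jax.jit(jax.grad(loss))
for _ in range(500):
    log_rates = log_rates - 0.5 * grad_fn(log_rates, xs, ys)
```

Here the trainable parameters are the logarithms of the transition rates, so gradient descent cannot drive any rate negative; the steady state is obtained by solving the master equation directly, which keeps the input-to-output map differentiable end to end.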