This work investigates object categorization by comparing the performance of self-organizing neural networks with that of human participants. Our aim is to test the hypothesis that human observers cannot ignore semantic knowledge when asked to categorize on the basis of structural-perceptual information alone. Participants and Kohonen networks performed three different categorization tasks. Assuming that the networks represent "ideal subjects", i.e., subjects able to perform the tasks transparently by processing semantic and structural information separately, we compared the networks' and the participants' results using a non-metric multidimensional scaling technique. The results show that the networks can produce a coherent categorical organization of the objects, and that participants cannot ignore semantic information even when explicitly asked to do so. The latter finding supports the hypothesis that perceptual and semantic information may co-occur at the same level, and the notion of early, automatic access to semantic knowledge.
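For readers unfamiliar with Kohonen networks, the following is a minimal, illustrative sketch of a self-organizing map in Python. It is not the authors' implementation; the grid size, learning-rate schedule, and neighborhood parameters are assumptions chosen only to demonstrate the mechanism by which such a network forms a topological categorical organization of its inputs.

```python
import numpy as np

def train_som(data, grid_shape=(5, 5), n_iters=500, lr0=0.5, sigma0=2.0, seed=0):
    """Train a toy Kohonen self-organizing map on `data` (n_samples, n_features).

    All hyperparameters here are illustrative assumptions. Returns the weight
    grid with shape (rows, cols, n_features).
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    n_features = data.shape[1]
    weights = rng.random((rows, cols, n_features))
    # Grid coordinates of each node, used by the neighborhood function.
    coords = np.stack(
        np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1
    )
    for t in range(n_iters):
        frac = t / n_iters
        lr = lr0 * (1 - frac)               # linearly decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5   # shrinking neighborhood radius
        x = data[rng.integers(len(data))]   # random training example
        # Best-matching unit: the node whose weight vector is closest to x.
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), (rows, cols))
        # A Gaussian neighborhood centred on the BMU pulls nearby nodes
        # toward x, which is what yields the map's topological ordering.
        grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-grid_d2 / (2 * sigma ** 2))
        weights += lr * h[..., None] * (x - weights)
    return weights

def best_matching_unit(weights, x):
    """Grid coordinates of the node whose weights are closest to input x."""
    dists = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)
```

After training on inputs drawn from distinct categories, items from different categories activate best-matching units in different regions of the grid, giving the kind of categorical organization the networks in this study are credited with.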