This is a recent paper submitted to the Journal of the Audio Engineering Society. In this paper, we take word embeddings and map them directly onto EQ parameters using a fully-connected neural network. We show that a neural network can learn equaliser settings for completely unknown words, producing EQ results that are both intuitive and perceptually plausible. Further subjective evaluation is required to validate these results, but in principle the idea of mapping semantic word descriptors directly onto audio effect parameters is not limited to equalisation. This approach could be developed in future and extended to a range of other effects, creating a suite of semantically driven audio processors.
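The core mapping could be sketched roughly as below. This is only an illustrative example, not the paper's implementation: the embedding dimension, hidden size, number of EQ bands, the ±12 dB gain range, and the random (untrained) weights are all assumptions made here for the sake of a runnable demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 300-d word embedding in, 5 EQ band gains out.
EMB_DIM, HIDDEN, N_BANDS = 300, 64, 5

# Randomly initialised weights stand in for a trained network.
W1 = rng.normal(0.0, 0.05, (EMB_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.05, (HIDDEN, N_BANDS))
b2 = np.zeros(N_BANDS)

def embedding_to_eq(embedding):
    """Map a word-embedding vector to per-band EQ gains in dB."""
    h = np.tanh(embedding @ W1 + b1)   # single hidden layer
    return 12.0 * np.tanh(h @ W2 + b2) # bound gains to +/- 12 dB

# Stand-in for the embedding of an unseen descriptor such as "warm".
warm = rng.normal(size=EMB_DIM)
gains = embedding_to_eq(warm)
print(gains.shape)  # one gain value per EQ band
```

In a real system, the embedding would come from a pretrained model (e.g. word2vec or GloVe) and the weights would be trained on pairs of descriptor embeddings and user-set EQ curves, which is what lets the network generalise to words it never saw during training.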