Offensive or antagonistic language targeted at individuals and social groups based on their personal characteristics (also known as cyber hate speech or cyber hate) is frequently posted and widely circulated via the World Wide Web. This can be considered a key risk factor for individual and societal tension linked to regional instability. Automated Web-based cyber hate detection is important for observing and understanding community and regional societal tension, especially in online social networks, where posts can be rapidly and widely viewed and disseminated. Previous work has used lexicons, bags of words, or probabilistic language parsing approaches, but these share a common weakness: cyber hate can be subtle and indirect, so relying on the occurrence of individual words or phrases can produce a significant number of false negatives and an inaccurate representation of trends in cyber hate. This problem motivated us to rethink how subtle language use is represented, such as references to perceived threats from the 'other', including immigration or job prosperity, made in a hateful context. We propose a novel framework that utilises language use around the concept of othering, together with intergroup threat theory, to identify these subtleties, and we implement a novel classification method that uses embedding learning to compute semantic distances between parts of speech considered to form part of an othering narrative. To validate our approach, we conduct experiments on four types of cyber hate, based on religion, disability, race and sexual orientation. Our model achieves F-measure scores for classifying hateful instances of 0.93, 0.86, 0.97 and 0.98 respectively, a significant improvement in classifier accuracy over the state of the art.
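
As a rough illustration of the embedding-based semantic distance idea summarised above, the following Python sketch learns word embeddings with gensim's Word2Vec and measures the similarity between an out-group pronoun and a perceived-threat term. This is not the authors' implementation: the corpus, hyperparameters and example terms are illustrative placeholders only, and a real pipeline would also involve part-of-speech tagging of a large social-media corpus.

```python
# Minimal sketch (assumptions: toy corpus, placeholder hyperparameters,
# illustrative word pair) of measuring semantic distance between terms
# that may co-occur in an othering narrative.

from gensim.models import Word2Vec

# Toy corpus of tokenised posts; a real system would train on a large
# social-media corpus with part-of-speech annotations.
corpus = [
    ["send", "them", "home", "they", "take", "our", "jobs"],
    ["they", "should", "not", "be", "allowed", "here"],
    ["we", "welcome", "refugees", "to", "our", "community"],
]

# Learn word embeddings; vector_size, window and min_count are
# placeholder values, and workers=1 keeps the run deterministic.
model = Word2Vec(sentences=corpus, vector_size=50, window=3,
                 min_count=1, seed=42, workers=1)

# Cosine similarity between an out-group pronoun and a perceived-threat
# term: a high score may signal an othering context.
print(model.wv.similarity("they", "jobs"))
```

In practice, similarities like this would feed into a classifier as features rather than being thresholded directly, so the meaningless values produced by this toy corpus are beside the point; the sketch only shows where the semantic distances come from.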