According to OpenAI, an artificial intelligence (AI) system has learned sentiment. It cannot express emotion, but it can read it.
The system has been termed an “unsupervised sentiment neuron.” It develops a good representation of sentiment solely by learning to predict the next character in the text of Amazon reviews.
“A linear model using this representation achieves state-of-the-art sentiment analysis accuracy on a small but extensively-studied dataset, the Stanford Sentiment Treebank (we get 91.8% accuracy versus the previous best of 90.2%), and can match the performance of previous supervised systems using 30-100x fewer labelled examples.”
There appears to be a single sentiment neuron within the system that contains most of the signal relevant to sentiment. The researchers report it not as something designed in, but as an emergent property of large neural networks, arising from their structure and training.
“We first trained a multiplicative LSTM with 4,096 units on a corpus of 82 million Amazon reviews to predict the next character in a chunk of text. Training took one month across four NVIDIA Pascal GPUs, with our model processing 12,500 characters per second.”
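To make the training objective concrete, here is a minimal sketch of what “predict the next character” means. The real model is a 4,096-unit multiplicative LSTM trained on 82 million reviews; the toy bigram count model below is only a stand-in for illustration, and the tiny corpus is made up.

```python
# Toy stand-in for next-character prediction. The real system uses a
# multiplicative LSTM; this bigram counter only illustrates the objective.
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each character, which characters tend to follow it."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev_char):
    """Return the most frequent character seen after prev_char."""
    following = counts.get(prev_char)
    if not following:
        return None
    return following.most_common(1)[0][0]

corpus = "this product is great. this price is good."
model = train_bigram(corpus)
print(predict_next(model, "t"))  # 'h' follows 't' most often in this corpus
```

The LSTM does the same kind of prediction, but conditions on everything read so far rather than just the previous character, which is what forces it to build a rich internal representation of the text.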
These learned units were used as the foundation for a sentiment classifier: a linear model that combines the units, each with its own weight. Weighting gives a feature more or less influence on the result: if unit X is weighted more heavily than unit Y, then X contributes more to the final sentiment score.
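A linear combination of weighted units can be sketched in a few lines. The unit activations and weights below are invented for illustration; in the real system they come from the trained mLSTM and the fitted linear model.

```python
# Minimal sketch of a weighted linear combination of learned units.
# All numbers here are hypothetical, chosen only to show the idea.
def sentiment_score(units, weights):
    """Weighted sum: units with larger weights influence the score more."""
    return sum(u * w for u, w in zip(units, weights))

units = [0.9, -0.2, 0.4]   # hypothetical activations of three units
weights = [2.0, 0.1, 0.5]  # the first unit is weighted far more than the second

print(sentiment_score(units, weights))  # 0.9*2.0 + (-0.2)*0.1 + 0.4*0.5 = 1.98
```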
“While training the linear model with L1 regularisation, we noticed it used surprisingly few of the learned units. Digging in, we realised there actually existed a single “sentiment neuron” that’s highly predictive of the sentiment value.”
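The reason L1 regularisation leaves “surprisingly few” active units can be seen in its proximal step, known as soft-thresholding: every weight is shrunk toward zero, and weights below the penalty are set to exactly zero. The weights below are made up, with one dominant unit standing in for the sentiment neuron.

```python
# Minimal sketch of how an L1 penalty produces sparse weights.
# Soft-thresholding is the proximal operator of the L1 norm: it shrinks
# every weight by `penalty` and zeroes any weight smaller than that.
def soft_threshold(weights, penalty):
    """Shrink each weight by `penalty`, zeroing those below it in magnitude."""
    out = []
    for w in weights:
        if w > penalty:
            out.append(w - penalty)
        elif w < -penalty:
            out.append(w + penalty)
        else:
            out.append(0.0)
    return out

weights = [0.03, -0.05, 1.5, 0.01, -0.02]  # one unit dominates (hypothetical)
print(soft_threshold(weights, 0.25))        # [0.0, 0.0, 1.25, 0.0, 0.0]
```

After the step, only the dominant unit survives, which mirrors how the linear model came to rely on a single sentiment neuron.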
Sentiment, in other words, became largely predictable from a single value. That neuron can classify Amazon reviews as positive or negative, and its value updates dynamically “on a character-by-character basis” as the model reads a text.
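Classifying from a single value amounts to a threshold check at each step. The trace of activations below is invented; in the real system each value would be read off the sentiment neuron in the mLSTM's hidden state as one more character is consumed.

```python
# Minimal sketch of classifying sentiment from one neuron's value.
# The trace below is hypothetical; real values come from the mLSTM state.
def classify(neuron_value, threshold=0.0):
    """Call the text positive if the neuron's value exceeds the threshold."""
    return "positive" if neuron_value > threshold else "negative"

# Hypothetical activations while reading "not bad, actually great":
neuron_trace = [-0.1, -0.4, -0.3, 0.2, 0.8]
print([classify(v) for v in neuron_trace])
# ['negative', 'negative', 'negative', 'positive', 'positive']
```

The per-step readout is why the neuron's judgement can flip mid-sentence as the review's tone turns.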
Typically, machine-learning systems need large labelled datasets to master a task. Unsupervised learning is different. This system learns a good representation of a dataset from unlabelled text alone, and that representation can then be used to “solve tasks using only a few labelled examples.”
According to the researchers, this result “implies that simply training large unsupervised next-step-prediction models on large amounts of data may be a good approach to use when creating systems with good representation learning capabilities.”
The researchers concluded that, beyond this specific experiment, “general unsupervised representation learning” could become a reality.
“Our results suggest that there exist settings where very large next-step-prediction models learn excellent unsupervised representations. Training a large neural network to predict the next frame in a large collection of videos may result in unsupervised representations for object, scene, and action classifiers.”
Scott Douglas Jacobsen is the Founder of In-Sight: Independent Interview-Based Journal and In-Sight Publishing. Jacobsen works for science and human rights, especially women’s and children’s rights. He considers the modern scientific and technological world the foundation for providing the basics of human life throughout the world and for advancing human rights as a universal movement among peoples everywhere.