Contextual Embeddings and Modern Representations
Overview
In our previous lesson, we explored traditional word embeddings like Word2Vec, GloVe, and FastText. While these approaches revolutionized NLP, they share a fundamental limitation: they assign the same vector to a word regardless of its context. The word "bank" has the same representation whether it refers to a financial institution or a river edge.
This lesson introduces contextual embeddings - dynamic representations that change based on the surrounding context. These models have dramatically improved performance across NLP tasks by capturing nuanced word usage and semantic relationships.
Learning Objectives
After completing this lesson, you will be able to:
- Understand the limitations of static word embeddings
- Explain how contextual embedding models like ELMo and BERT work
- Recognize the architectural innovations that enable context-sensitivity
- Compare different contextual embedding approaches
- Understand multimodal embeddings like CLIP
- Apply contextual embeddings to practical NLP tasks
The Need for Context
The Polysemy Problem
Many words have multiple meanings (polysemy) that traditional embeddings cannot distinguish:
- "bank" → financial institution or river edge
- "spring" → season, water source, coiled metal, or a jump
- "light" → not heavy, not dark, or to ignite
Analogy: Fixed vs. Adaptive Identity
Imagine if you had a single, unchanging photo ID that had to represent you in all contexts - professional, casual, formal, athletic, etc. This would fail to capture different aspects of your identity that emerge in different settings.
Contextual embeddings are like having a dynamic ID that adapts to show the most relevant version of you for each specific situation.
From Static to Dynamic Representations
The Evolution of Word Representations
Word representations have evolved in stages: from count-based vectors, to static embeddings like Word2Vec and GloVe, to today's deep contextual models.
ELMo: Embeddings from Language Models
ELMo (Embeddings from Language Models), introduced by Peters et al. in 2018, was the first major contextual embedding model to gain widespread use.
Key Innovation
ELMo uses a bidirectional LSTM trained on a language modeling objective. The embeddings are derived from all internal states of the LSTM, not just the final layer.
Architecture
- Character-level convolutional neural network to handle out-of-vocabulary words
- Multiple layers of bidirectional LSTMs
- Weighted combination of representations from different layers
Mathematical Formulation
For a token $k$ in context, ELMo builds a task-specific representation as a weighted combination of the biLM's layer outputs:

$$\text{ELMo}_k^{task} = \gamma^{task} \sum_{j=0}^{L} s_j^{task} \, \mathbf{h}_{k,j}^{LM}$$

Where:
- $\mathbf{h}_{k,j}^{LM}$ is the contextual representation of token $k$ from the $j$-th layer
- $s_j^{task}$ are softmax-normalized weights
- $\gamma^{task}$ is a scaling parameter
- $L$ is the number of layers
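To make this weighted combination concrete, here is a minimal PyTorch sketch in the spirit of the formula above; the `ScalarMix` class name and the toy tensor shapes are illustrative stand-ins, not ELMo's actual implementation.

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """Combine per-layer representations with softmax-normalized weights s_j
    and a global scale gamma, as in the ELMo formula above."""
    def __init__(self, num_layers):
        super().__init__()
        self.scalar_weights = nn.Parameter(torch.zeros(num_layers))  # s_j before softmax
        self.gamma = nn.Parameter(torch.ones(1))                     # task-specific scale

    def forward(self, layer_states):
        # layer_states: list of tensors, each (batch, seq_len, hidden)
        s = torch.softmax(self.scalar_weights, dim=0)
        mixed = sum(w * h for w, h in zip(s, layer_states))
        return self.gamma * mixed

# Toy example with random tensors standing in for biLM layer outputs
layers = [torch.randn(2, 5, 16) for _ in range(3)]
mix = ScalarMix(num_layers=3)
print(mix(layers).shape)  # torch.Size([2, 5, 16])
```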
Layer Specialization
Different layers capture different types of information:
- Lower layers capture syntactic information (part of speech, word structure)
- Higher layers capture semantic information (word sense, context-specific meaning)
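A quick way to look at layer-wise representations in practice is to request all hidden states from a pre-trained model. The sketch below uses BERT via Hugging Face Transformers as a convenient stand-in (ELMo itself is not part of that library); it simply shows that the model exposes one tensor per layer, which is what layer-probing studies analyze.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# hidden_states: embedding-layer output plus one tensor per transformer layer
hidden_states = outputs.hidden_states
print(len(hidden_states), hidden_states[-1].shape)  # 13 tensors of shape (1, seq_len, 768)
```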
Visualizing ELMo's Contextual Representations
ELMo assigns the word "bank" noticeably different vectors depending on context: its representations in financial sentences cluster together and sit apart from its representations in river-related sentences.
BERT: Bidirectional Encoder Representations from Transformers
BERT, introduced by Devlin et al. in 2018, represented a major leap forward by using transformer architecture instead of LSTMs.
Key Innovations
- Bidirectional attention: Words attend to both left and right context simultaneously
- Masked language modeling: Predicts randomly masked tokens using bidirectional context
- Next sentence prediction: Models relationship between sentence pairs
- Transfer learning: Pre-train once, fine-tune for various tasks
Architecture
BERT uses the transformer encoder architecture with:
- Input embeddings (the sum of token, position, and segment embeddings; see the toy sketch after this list)
- Multiple layers of self-attention and feed-forward networks
- Layer normalization and residual connections
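The toy sketch below illustrates the input-embedding sum from the first bullet above: token, position, and segment embeddings of the same dimensionality are added together (and layer-normalized) before entering the encoder. The sizes match BERT-base's published configuration; the token IDs are just an example.

```python
import torch
import torch.nn as nn

vocab_size, max_len, num_segments, hidden = 30522, 512, 2, 768

token_emb = nn.Embedding(vocab_size, hidden)
position_emb = nn.Embedding(max_len, hidden)
segment_emb = nn.Embedding(num_segments, hidden)
layer_norm = nn.LayerNorm(hidden)

token_ids = torch.tensor([[101, 7592, 2088, 102]])        # e.g. [CLS] hello world [SEP]
positions = torch.arange(token_ids.size(1)).unsqueeze(0)  # 0, 1, 2, 3
segments = torch.zeros_like(token_ids)                    # all tokens belong to sentence A

input_embeddings = layer_norm(token_emb(token_ids) + position_emb(positions) + segment_emb(segments))
print(input_embeddings.shape)  # torch.Size([1, 4, 768])
```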
Pre-training Tasks
- Masked Language Model (MLM): Randomly mask 15% of input tokens and predict them from bidirectional context (a short demo follows this list)
- Next Sentence Prediction (NSP): Given two sentences, predict if the second follows the first
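The fill-mask pipeline from Hugging Face Transformers gives a quick feel for the MLM objective; the example sentence is arbitrary, and the exact predictions and scores will depend on the model version.

```python
from transformers import pipeline

# BERT predicts the [MASK] token from both left and right context
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill_mask("The river overflowed its [MASK] after the storm."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```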
BERT Variants
- BERT-base: 12 layers, 768 hidden units, 12 attention heads (110M parameters)
- BERT-large: 24 layers, 1024 hidden units, 16 attention heads (340M parameters)
- Multilingual BERT: Trained on 104 languages
- Domain-specific BERTs: BioBERT (biomedical), SciBERT (scientific), FinBERT (financial)
Visualizing BERT Attention
Attention visualizations of example sentences show each token attending to both its left and right context, with different heads specializing in patterns such as syntax and coreference.
RoBERTa and Improvements on BERT
RoBERTa (a Robustly Optimized BERT Pretraining Approach) improved on BERT by:
- Training longer with more data
- Removing the Next Sentence Prediction objective
- Using dynamic masking patterns
- Using larger batches
- Using a larger byte-level BPE vocabulary
These changes led to significant performance improvements, showing that the original BERT was undertrained rather than fundamentally limited.
The Embedding Benchmarking Revolution
MTEB: Massive Text Embedding Benchmark
The MTEB evaluates embedding models across:
- Retrieval tasks: Finding relevant documents for a query
- Classification tasks: Assigning texts to categories
- Clustering tasks: Grouping similar texts
- Similarity tasks: Measuring semantic similarity
- Reranking tasks: Reordering retrieved documents by relevance
- Summarization tasks: Creating concise summaries of text
- Pair classification tasks: Determining relationships between text pairs
MTEB Leaderboard Performance
The public MTEB leaderboard, hosted on Hugging Face, ranks embedding models by their average score across these task families and has become the standard way to compare general-purpose text embedding models.
Sentence-BERT: Efficient Sentence Embeddings
Sentence-BERT (SBERT) modified the BERT architecture to efficiently generate sentence embeddings that can be compared using cosine similarity.
Key Innovations
- Siamese and triplet network structures for training
- Mean pooling over token embeddings
- Contrastive learning objectives
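As a rough sketch of how such a siamese setup is trained in practice, the snippet below fine-tunes a small Sentence-Transformers model with an in-batch-negatives contrastive loss (`MultipleNegativesRankingLoss`) using the library's classic `fit` API; newer releases also offer a Trainer-based workflow, and the training pairs here are made up purely for illustration.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("all-MiniLM-L6-v2")

# Each example is an (anchor, positive) pair; other pairs in the batch serve as negatives
train_examples = [
    InputExample(texts=["How do I reset my password?", "Steps to change your account password"]),
    InputExample(texts=["Best pasta recipe", "How to cook spaghetti at home"]),
    InputExample(texts=["Weather in Paris today", "Paris weather forecast"]),
    InputExample(texts=["Fix a flat bicycle tire", "Repairing a punctured bike tube"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=4)
train_loss = losses.MultipleNegativesRankingLoss(model)

# One short pass just to show the training loop; real fine-tuning needs far more data
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=2)
```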
Practical Applications
- Semantic search
- Clustering
- Semantic textual similarity
- Information retrieval
Code Example: Using Sentence Transformers
```python
from sentence_transformers import SentenceTransformer, util

# Load model
model = SentenceTransformer('all-MiniLM-L6-v2')

# Prepare sentences
sentences = [
    "This is an example sentence.",
    "Each sentence is converted to a vector.",
    "Sentences with similar meanings have similar vectors.",
    "This sentence is like the first one."
]

# Encode sentences to get embeddings
embeddings = model.encode(sentences)

# Calculate cosine similarity between the first sentence and all sentences
cosine_scores = util.cos_sim(embeddings[0], embeddings)

print("Similarity scores with the first sentence:")
for i, score in enumerate(cosine_scores[0]):
    print(f"Sentence {i+1}: {score:.4f}")
```
Beyond Text: CLIP and Multimodal Embeddings
CLIP (Contrastive Language-Image Pre-training) by OpenAI represents a breakthrough in connecting text and images in the same embedding space.
How CLIP Works
- Train two encoders: one for images (ViT or ResNet) and one for text (Transformer)
- Learn to maximize similarity between correct image-text pairs
- Minimize similarity for incorrect pairs
Contrastive Pre-training
CLIP uses a contrastive objective with a very large batch size (32,768 image-text pairs). For a batch of $N$ pairs, the image-to-text part of the loss is:

$$\mathcal{L}_{\text{image} \to \text{text}} = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{\exp\!\big(\text{sim}(I_i, T_i)/\tau\big)}{\sum_{j=1}^{N} \exp\!\big(\text{sim}(I_i, T_j)/\tau\big)}$$

and the full objective averages this with the symmetric text-to-image term.

Where:
- $\text{sim}(I_i, T_j)$ is the cosine similarity between the image and text embeddings
- $\tau$ is a temperature parameter
- $N$ is the batch size
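Here is a minimal PyTorch sketch of this symmetric contrastive loss, assuming the image and text embeddings have already been produced by their respective encoders; the tensors below are random placeholders.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Normalize so that dot products are cosine similarities
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (N, N) similarity matrix: entry (i, j) compares image i with text j
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(image_emb.size(0))  # matching pairs lie on the diagonal

    loss_img_to_txt = F.cross_entropy(logits, targets)
    loss_txt_to_img = F.cross_entropy(logits.t(), targets)
    return (loss_img_to_txt + loss_txt_to_img) / 2

# Toy batch of 8 "image" and 8 "text" embeddings with 512 dimensions
loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```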
CLIP Applications
- Zero-shot image classification
- Cross-modal retrieval (find images from text, text from images)
- Visual question answering
- Image generation guidance (DALL-E, Stable Diffusion)
Visualizing CLIP's Joint Embedding Space
Because images and their captions are mapped into the same embedding space, semantically related images and texts end up near one another, which is exactly what makes cross-modal retrieval and zero-shot classification possible.
State-of-the-Art Embedding Models
E5 Family (Microsoft)
The E5 models (short for "EmbEddings from bidirEctional Encoder rEpresentations") have ranked at or near the top of the MTEB leaderboard, with several innovations:
- Unsupervised pre-training with weakly supervised contrastive learning
- Self-teaching with hard negative mining
- Multi-stage training process
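For reference, here is a hedged usage sketch: E5 checkpoints on the Hugging Face Hub (e.g. intfloat/e5-small-v2) are typically loaded through sentence-transformers and expect "query: " and "passage: " prefixes. The model name and prefix convention are assumptions based on the released checkpoints and may change.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("intfloat/e5-small-v2")

# E5 models are trained with explicit "query:" / "passage:" prefixes
query = "query: how do neural networks learn?"
passages = [
    "passage: Neural networks learn by adjusting their weights with gradient descent.",
    "passage: The Eiffel Tower is located in Paris.",
]

scores = util.cos_sim(model.encode(query), model.encode(passages))
print(scores)  # the first passage should score noticeably higher
```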
BGE (BAAI)
The BGE (BAAI General Embedding) models from the Beijing Academy of Artificial Intelligence:
- Custom hard negative mining strategy
- Diverse training data selection
- Adversarial training techniques
GTE (Alibaba)
The GTE (General Text Embeddings) models feature:
- Curriculum learning approach
- Multi-stage contrastive learning
- Domain-specific fine-tuning
Why Are Contextual Embeddings Better?
Contextual embeddings outperform static embeddings in most NLP tasks for several key reasons:
1. Word Sense Disambiguation
| Example Sentence | Word2Vec Representation | BERT Representation |
|---|---|---|
| "The bank approved my loan application." | Single vector for 'bank' | Financial institution context |
| "I sat on the bank of the river." | Same vector as above | River edge context |
| "Please bank the fire before leaving." | Same vector as above | Verb 'to cover' context |
This table illustrates a key advantage of contextual embeddings: while traditional models like Word2Vec assign the same vector regardless of usage, models like BERT create distinct representations based on context.
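A small experiment makes the table above concrete: the sketch below pulls BERT's contextual vector for "bank" out of each sentence and compares the vectors with cosine similarity. The expectation (not a guaranteed number) is that the two financial uses land much closer together than the financial and river uses.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def bank_vector(sentence):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]        # (seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index("bank")]                      # vector at the "bank" position

v_loan = bank_vector("The bank approved my loan application.")
v_river = bank_vector("I sat on the bank of the river.")
v_rates = bank_vector("The bank raised its interest rates this week.")

cos = torch.nn.functional.cosine_similarity
print(f"loan vs. rates (same sense):      {cos(v_loan, v_rates, dim=0):.3f}")
print(f"loan vs. river (different sense): {cos(v_loan, v_river, dim=0):.3f}")
```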
2. Handling Polysemy and Homonyms
Contextual models can distinguish different meanings of the same word form:
- "I used a bat to hit the ball" vs. "The bat flew into the cave"
- "The bass guitar needs tuning" vs. "I caught a bass in the lake"
3. Capturing Syntactic Roles
The same word can serve different syntactic functions, which contextual models capture:
- "Time flies like an arrow" ('flies' as verb)
- "Fruit flies like a banana" ('flies' as noun)
4. Handling Co-reference
Contextual models excel at understanding what pronouns refer to:
- "The trophy didn't fit in the suitcase because it was too large" (what was large?)
5. Incorporating World Knowledge
Pre-training on massive text corpora imbues contextual models with factual knowledge:
- Capital cities, famous people, historical events
- Common sense relationships and properties
Practical Applications of Contextual Embeddings
Semantic Search
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')

# Function to get sentence embeddings
def get_embeddings(text_list):
    # Tokenize input texts
    encoded_input = tokenizer(text_list, padding=True, truncation=True, return_tensors='pt')

    # Compute token embeddings
    with torch.no_grad():
        model_output = model(**encoded_input)

    # Mean pooling
    sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

    # Normalize embeddings
    sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

    return sentence_embeddings

# Mean pooling over token embeddings, weighted by the attention mask
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Example corpus
corpus = [
    "Deep learning is a subfield of machine learning.",
    "Python is a popular programming language for AI.",
    "Neural networks are inspired by the human brain.",
    "Natural language processing deals with text data.",
    "Computer vision focuses on image recognition tasks."
]

# Query
query = "How do artificial neural networks work?"

# Get embeddings
corpus_embeddings = get_embeddings(corpus)
query_embedding = get_embeddings([query])

# Calculate cosine similarities between the query and every corpus sentence
similarities = F.cosine_similarity(query_embedding, corpus_embeddings)

# Display results, best match first
print(f"Query: {query}\n")
print("Top matches:")
for i in torch.argsort(similarities, descending=True):
    print(f"{similarities[i].item():.4f}: {corpus[i]}")
```
Zero-shot Classification
```python
from transformers import pipeline

# Load zero-shot classification pipeline
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Example text
text = "The restaurant food was absolutely wonderful and the service was excellent."

# Candidate labels
candidate_labels = ["positive", "negative", "neutral"]

# Classify
result = classifier(text, candidate_labels)

print(f"Text: {text}")
print("\nClassification Results:")
for label, score in zip(result['labels'], result['scores']):
    print(f"{label}: {score:.4f}")
```
Limitations and Challenges
Despite their power, contextual embeddings still face several challenges:
- Computational Cost: Larger models require significant resources
- Tokenization Limitations: Suboptimal handling of rare words and code-switching
- Context Window Size: Limited ability to capture very long-range dependencies
- Bias and Fairness: Models can inherit and amplify biases from training data
- Interpretability: Black-box nature makes it hard to understand why models make certain predictions
The Future of Embeddings
Emerging Trends
- Model Compression: Distillation and pruning to create smaller, faster models
- Multimodal Embeddings: Beyond text-image to include audio, video, and structured data
- Long-Context Models: Extending context windows to handle book-length content
- Task-Specific Adaptations: Specialized embeddings for specific domains and applications
- Unified Representations: Single models handling multiple modalities and tasks
The Efficiency Revolution
Recent advances focus on creating more efficient embeddings:
- FID (Fast Intent Detection): 120x faster inference with minimal quality loss
- E5-Small: Competitive performance with much smaller model size
- Embedding models with quantization for mobile and edge devices
Summary
In this lesson, we've covered:
- The evolution from static to contextual embeddings
- Key models: ELMo, BERT, RoBERTa, and their variants
- Multimodal embeddings with CLIP
- Evaluation benchmarks like MTEB
- Practical applications of contextual embeddings
- Future directions in embedding research
Contextual embeddings have dramatically transformed NLP by capturing the nuanced, context-dependent nature of language. While traditional embeddings opened the door to modern NLP, contextual models have pushed capabilities far beyond what was previously possible.
In our next lesson, we'll explore pre-transformer models like RNNs, LSTMs, and GRUs, which were the state-of-the-art before the transformer revolution that enabled today's contextual embeddings.
Practice Exercises
- Contextual Analysis:
  - Compare how BERT and Word2Vec handle ambiguous words in different contexts
  - Visualize the difference using dimensionality reduction techniques
- Embedding-Based Semantic Search:
  - Build a simple semantic search engine using Sentence-BERT
  - Compare its performance with keyword-based search
- Zero-Shot Classification:
  - Implement a zero-shot classifier using pre-trained embeddings
  - Evaluate its performance on a dataset without fine-tuning
- Cross-Lingual Embeddings:
  - Explore how multilingual models handle translation and cross-lingual tasks
  - Test semantic similarity across languages
Additional Resources
- ELMo: Deep Contextualized Word Representations
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
- CLIP: Learning Transferable Visual Models From Natural Language Supervision
- MTEB: Massive Text Embedding Benchmark
- Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
- Hugging Face Transformers Documentation
- The Illustrated BERT