Context-Aware Semantic Similarity Measurement for Unsupervised Word Sense Disambiguation
An Empirical Evaluation of Word Embedding Models for Subjectivity Analysis Tasks
This GitHub repository contains implementations of three popular word embedding techniques: Singular Value Decomposition (SVD), Continuous Bag of Words (CBOW), and Embeddings from Language Models (ELMo). Word embeddings are a fundamental component of natural language processing and are essential for many text-based machine learning tasks.
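As a rough illustration of the first of those techniques (this is my own minimal sketch, not code from the repository), SVD-based embeddings factorize a word-word co-occurrence matrix and keep the top-k singular directions as dense word vectors:

```python
# Minimal sketch of SVD word embeddings (illustrative, not the repo's code):
# count co-occurrences in a small window, then truncate the SVD.
import numpy as np

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
]
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Co-occurrence counts within a +/-1 token window.
window = 1
C = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i in range(len(sent)):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                C[idx[sent[i]], idx[sent[j]]] += 1

# Truncated SVD: scaling the top-k left singular vectors by the
# singular values gives k-dimensional word embeddings.
U, S, _ = np.linalg.svd(C)
k = 2
embeddings = U[:, :k] * S[:k]
print(embeddings.shape)  # one k-dim vector per vocabulary word
```

In practice the counts are usually reweighted (e.g. with PPMI) before factorization, which tends to produce much better vectors than raw counts.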
Pretrained transformer and embedding language models
An implementation of ELMo embeddings in PyTorch, using stacked Bi-LSTMs to produce contextualized word representations. Pretrained with a bidirectional language-modeling objective and evaluated on the AG News text classification dataset, reporting accuracy, F1, and precision.
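The architecture described above can be sketched as follows (my own hedged illustration, not the repository's code): token embeddings feed a stack of bidirectional LSTMs, and a learned softmax-weighted mixture of the layer outputs yields the contextual representation, as in the original ELMo design.

```python
# Illustrative ELMo-style encoder (not the repo's implementation):
# stacked Bi-LSTMs with scalar mixing of layer outputs.
import torch
import torch.nn as nn

class TinyELMo(nn.Module):
    def __init__(self, vocab_size, dim=64, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # First layer consumes token embeddings; later layers consume
        # the 2*dim output of the previous bidirectional layer.
        self.bilstms = nn.ModuleList(
            nn.LSTM(dim if i == 0 else 2 * dim, dim,
                    batch_first=True, bidirectional=True)
            for i in range(layers)
        )
        # Learned scalar mixing weights over the LSTM layers.
        self.scalars = nn.Parameter(torch.zeros(layers))

    def forward(self, token_ids):
        h = self.embed(token_ids)
        outs = []
        for lstm in self.bilstms:
            h, _ = lstm(h)
            outs.append(h)
        w = torch.softmax(self.scalars, dim=0)
        # Weighted sum of layer outputs -> contextual embeddings.
        return sum(wi * o for wi, o in zip(w, outs))

model = TinyELMo(vocab_size=100)
reps = model(torch.randint(0, 100, (1, 5)))  # one 5-token sentence
print(reps.shape)  # (batch, seq_len, 2*dim)
```

For a downstream classifier, the per-token representations would typically be pooled (e.g. mean over the sequence) and passed to a linear layer.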
This repository contains ELMo (Embeddings from Language Models) implementations trained on a news dataset, along with a classification task built on the resulting ELMo embeddings.
This project implements several word embedding methods.
Intro-to-NLP assignments on Word2Vec, smoothing, ELMo, and POS tagging
Embedding language models