
Demystifying Cosine Similarity | Towards Data Science


Cosine similarity is a commonly used metric for operationalizing tasks such as semantic search and document comparison in the field of natural language processing (NLP). Introductory NLP courses often provide only a high-level justification for using cosine similarity in such tasks (as opposed to, say, Euclidean distance) without explaining the underlying mathematics, leaving many data scientists with a rather vague understanding of the subject matter. To address this gap, the following article lays out the mathematical intuition behind the cosine similarity metric and shows how this can help us interpret results in practice, with hands-on examples in Python.

Note: All figures and formulas in the following sections have been created by the author of this article.

Mathematical Intuition

The cosine similarity metric is based on the cosine function that readers may recall from high school math. The cosine function exhibits a repeating wavelike pattern, a full cycle of which is depicted in Figure 1 below for the range 0 <= x <= 2*pi. The Python code used to produce the figure is also included for reference.

import numpy as np
import matplotlib.pyplot as plt

# Define the x range from 0 to 2*pi
x = np.linspace(0, 2 * np.pi, 500)
y = np.cos(x)

# Create the plot
plt.figure(figsize=(8, 4))
plt.plot(x, y, label='cos(x)', color='blue')

# Add notches on the x-axis at pi/2, pi, 3*pi/2, and 2*pi
notch_positions = [0, np.pi/2, np.pi, 3*np.pi/2, 2*np.pi]
notch_labels = ['0', 'pi/2', 'pi', '3*pi/2', '2*pi']
plt.xticks(ticks=notch_positions, labels=notch_labels)

# Add custom horizontal gridlines only at y = -1, 0, 1
for y_val in [-1, 0, 1]:
    plt.axhline(y=y_val, color='grey', linestyle='--', linewidth=0.5)

# Add vertical gridlines at the specified x-values
for x_val in notch_positions:
    plt.axvline(x=x_val, color='grey', linestyle='--', linewidth=0.5)

# Customize the plot
plt.xlabel("x")
plt.ylabel("cos(x)")

# Final layout and display
plt.tight_layout()
plt.show()
Figure 1: Cosine Function

The function parameter x denotes an angle in radians (e.g., the angle between two vectors in an embedding space), where pi/2, pi, 3*pi/2, and 2*pi are 90, 180, 270, and 360 degrees, respectively.

To understand why the cosine function can serve as a useful basis for designing a vector similarity metric, notice that the basic cosine function, without any functional transformations as shown in Figure 1, has maxima at x = 2*a*pi, minima at x = (2*b + 1)*pi, and roots at x = (c + 1/2)*pi for some integers a, b, and c. In other words, if x denotes the angle between two vectors, cos(x) returns the largest value when the vectors point in the same direction, the smallest value when the vectors point in opposite directions, and 0 when the vectors are orthogonal to each other.
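
As a quick check of this behavior, the key angles can be evaluated directly with NumPy (a minimal sketch, separate from the plotting code above):

import numpy as np

# cos(0) = 1 (same direction), cos(pi/2) = 0 (orthogonal), cos(pi) = -1 (opposite directions)
for angle in [0, np.pi / 2, np.pi, 3 * np.pi / 2, 2 * np.pi]:
    print(f"cos({angle:.4f}) = {np.cos(angle):+.4f}")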

This behavior of the cosine function neatly captures the interplay between two key concepts in NLP: semantic overlap (conveying how much meaning is shared between two texts) and semantic polarity (capturing the oppositeness of meaning in texts). For example, the texts “I liked this movie” and “I enjoyed this film” would have high semantic overlap (they express essentially the same meaning despite using different words) and low semantic polarity (they do not express opposite meanings). Now, if the embedding vectors for two words happen to encode both semantic overlap and polarity, then we would expect synonyms to have cosine similarity approaching 1, antonyms to have cosine similarity approaching -1, and unrelated words to have cosine similarity approaching 0.

In practice, we will typically not know the angle x directly. Instead, we must derive the cosine value from the vectors themselves. Given two vectors U and V, each with n elements, the cosine of the angle between these vectors (which equals the cosine similarity metric) is computed as the dot product of the vectors divided by the product of the vector magnitudes:

cos(x) = (U . V) / (||U|| * ||V||) = (u_1*v_1 + u_2*v_2 + ... + u_n*v_n) / (sqrt(u_1^2 + ... + u_n^2) * sqrt(v_1^2 + ... + v_n^2))

The above formula for the cosine of the angle between two vectors can be derived from the so-called Cosine Rule, as demonstrated in the segment between minutes 12 and 18 of this video:

A neat proof of the Cosine Rule itself is presented in this video:
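
For reference, here is a brief written sketch of how the Cosine Rule leads to the formula above. Applying the rule to the triangle formed by U, V, and U - V, with x denoting the angle between U and V:

||U - V||^2 = ||U||^2 + ||V||^2 - 2 * ||U|| * ||V|| * cos(x)

Expanding the left-hand side as (U - V) . (U - V) = ||U||^2 - 2 * (U . V) + ||V||^2 and cancelling the common terms leaves U . V = ||U|| * ||V|| * cos(x), i.e. cos(x) = (U . V) / (||U|| * ||V||).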

The following Python implementation of cosine similarity explicitly operationalizes the formula presented above, without relying on any black-box, third-party packages:

import math

def cosine_similarity(U, V):
    if len(U) != len(V):
        raise ValueError("Vectors must be of the same length.")

    # Compute dot product and magnitudes
    dot_product = sum(u * v for u, v in zip(U, V))
    magnitude_U = math.sqrt(sum(u ** 2 for u in U))
    magnitude_V = math.sqrt(sum(v ** 2 for v in V))

    # Zero-vector handling to avoid division by zero
    if magnitude_U == 0 or magnitude_V == 0:
        raise ValueError("Cannot compute cosine similarity for zero-magnitude vectors.")

    return dot_product / (magnitude_U * magnitude_V)

Readers can refer to this article for a more efficient Python implementation of the cosine distance metric (defined as 1 minus cosine similarity) using the NumPy and SciPy packages.
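
To illustrate what such an implementation might look like (a minimal sketch, not the linked article's code), both packages expose the required building blocks directly:

import numpy as np
from scipy.spatial.distance import cosine

U = np.array([1.0, 2.0, 3.0])
V = np.array([4.0, 5.0, 6.0])

# NumPy: cosine similarity from the dot product and the vector norms
cosine_sim = np.dot(U, V) / (np.linalg.norm(U) * np.linalg.norm(V))

# SciPy: cosine() returns the cosine distance, i.e. 1 minus cosine similarity
cosine_dist = cosine(U, V)

print(cosine_sim, 1 - cosine_dist)  # both values should agree (up to floating-point error)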

Finally, it is worth comparing the mathematical intuition of cosine similarity (or distance) with that of Euclidean distance, which measures the straight-line distance between two vectors and can also serve as a vector similarity metric. Specifically, the lower the Euclidean distance between two vectors, the higher their semantic similarity is likely to be. The Euclidean distance between two vectors U and V (each of length n) can be computed using the following formula:

d(U, V) = sqrt((u_1 - v_1)^2 + (u_2 - v_2)^2 + ... + (u_n - v_n)^2)

Below is the corresponding Python implementation:

import math

def euclidean_distance(U, V):
    if len(U) != len(V):
        raise ValueError("Vectors must be of the same length.")

    # Compute the sum of squared differences
    sum_squared_diff = sum((u - v) ** 2 for u, v in zip(U, V))

    # Take the square root of the sum
    return math.sqrt(sum_squared_diff)

Notice that, because the elementwise differences in the Euclidean distance formula are squared, the resulting metric will always be a non-negative number: zero if the vectors are identical, positive otherwise. In the NLP context, this implies that Euclidean distance will not reflect semantic polarity in quite the same way as cosine distance does. Moreover, as long as two vectors point in the same direction, the cosine of the angle between them will remain the same regardless of the vector magnitudes. By contrast, the Euclidean distance metric is affected by differences in vector magnitude, which may lead to misleading interpretations in practice (e.g., two texts of different lengths may yield a high Euclidean distance despite being semantically similar). As such, cosine similarity is the preferred metric in many NLP scenarios, where determining vector (or semantic) directionality is the primary concern.
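
A small numeric example, reusing the two helper functions defined above, makes the contrast concrete: scaling a vector leaves the cosine similarity unchanged but increases the Euclidean distance.

U = [1.0, 2.0, 3.0]
V = [2.0, 4.0, 6.0]  # same direction as U, but twice the magnitude

print(cosine_similarity(U, V))   # ~1.0: the angle is zero, so the magnitudes do not matter
print(euclidean_distance(U, V))  # ~3.74: the metric grows with the difference in magnitude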

Theory versus Practice

In a practical NLP scenario, the interpretation of cosine similarity hinges on the extent to which the vector embedding encodes polarity in addition to semantic overlap. In the following hands-on example, we will investigate the similarity between given pairs of words using a pretrained embedding model that does not encode polarity (all-MiniLM-L6-v2) and one that does (distilbert-base-uncased-finetuned-sst-2-english). We will also use more efficient implementations of cosine similarity and Euclidean distance by leveraging functions provided by the SciPy package.

from scipy.spatial.distance import cosine as cosine_distance
from sentence_transformers import SentenceTransformer
from transformers import AutoTokenizer, AutoModel
import torch

# Words to embed
words = ["movie", "film", "good", "bad", "spoon", "car"]

# Load the pre-trained embedding models from Hugging Face
model_1 = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
model_2_name = "distilbert-base-uncased-finetuned-sst-2-english"
model_2_tokenizer = AutoTokenizer.from_pretrained(model_2_name)
model_2 = AutoModel.from_pretrained(model_2_name)

# Generate embeddings for model 1
embeddings_1 = dict(zip(words, model_1.encode(words)))

# Generate embeddings for model 2
inputs = model_2_tokenizer(words, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model_2(**inputs)
    embedding_vectors_model_2 = outputs.last_hidden_state.mean(dim=1)
embeddings_2 = {word: vector for word, vector in zip(words, embedding_vectors_model_2)}

# Compute and print cosine similarity (1 - cosine distance) for both embedding models
print("Cosine similarity for embedding model 1:")
print("movie", "\t", "film", "\t", 1 - cosine_distance(embeddings_1["movie"], embeddings_1["film"]))
print("good", "\t", "bad", "\t", 1 - cosine_distance(embeddings_1["good"], embeddings_1["bad"]))
print("spoon", "\t", "car", "\t", 1 - cosine_distance(embeddings_1["spoon"], embeddings_1["car"]))
print()

print("Cosine similarity for embedding model 2:")
print("movie", "\t", "film", "\t", 1 - cosine_distance(embeddings_2["movie"], embeddings_2["film"]))
print("good", "\t", "bad", "\t", 1 - cosine_distance(embeddings_2["good"], embeddings_2["bad"]))
print("spoon", "\t", "car", "\t", 1 - cosine_distance(embeddings_2["spoon"], embeddings_2["car"]))
print()

Output:

Cosine similarity for embedding model 1:
movie 	 film 	 0.8426464702276286
good 	 bad 	 0.5871497042685934
spoon 	 car 	 0.22919675707817078

Cosine similarity for embedding model 2:
movie 	 film 	 0.9638281550070811
good 	 bad 	 -0.3416433451550165
spoon 	 car 	 0.5418748837234599

The words “movie” and “film”, which are typically used as synonyms, have cosine similarity close to 1, suggesting high semantic overlap as expected. The words “good” and “bad” are antonyms, and we see this reflected in the negative cosine similarity result when using the second embedding model, which is known to encode semantic polarity. Finally, the words “spoon” and “car” are semantically unrelated, and the corresponding orthogonality of their vector embeddings is indicated by their cosine similarity results being closer to zero than for “movie” and “film”.
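
For completeness, the Euclidean distances for the same word pairs can also be inspected using SciPy, as mentioned earlier; below is a minimal sketch that assumes the embeddings_1 dictionary from the script above is available. Unlike the cosine results, these values are sensitive to the magnitudes of the embedding vectors.

from scipy.spatial.distance import euclidean

# Euclidean distances for the same word pairs under embedding model 1
print("Euclidean distance for embedding model 1:")
print("movie", "\t", "film", "\t", euclidean(embeddings_1["movie"], embeddings_1["film"]))
print("good", "\t", "bad", "\t", euclidean(embeddings_1["good"], embeddings_1["bad"]))
print("spoon", "\t", "car", "\t", euclidean(embeddings_1["spoon"], embeddings_1["car"]))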

The Wrap

The cosine similarity between two vectors is based on the cosine of the angle they form and, unlike metrics such as Euclidean distance, is not sensitive to differences in vector magnitudes. In theory, cosine similarity should be close to 1 if the vectors point in the same direction (indicating high similarity), close to -1 if the vectors point in opposite directions (indicating high dissimilarity), and close to 0 if the vectors are orthogonal (indicating unrelatedness). However, the exact interpretation of cosine similarity in a given NLP scenario depends on the nature of the embedding model used to vectorize the textual data (e.g., whether the embedding model encodes polarity in addition to semantic overlap).
