
Method of Moments Estimation with Python Code

by Admin
February 13, 2025
in Machine Learning


Let’s say you are in a customer care center, and you would like to know the probability distribution of the number of calls per minute, or in other words, you want to answer the question: what is the probability of receiving zero, one, two, … etc., calls per minute? You need this distribution in order to predict the probability of receiving different numbers of calls, based on which you can plan how many employees are needed, whether or not an expansion is required, etc.

In order to make our decision data informed, we start by collecting data from which we try to infer this distribution, or in other words, we want to generalize from the sample data to the unseen data, which is also known as the population in statistical terms. This is the essence of statistical inference.

From the collected data we can compute the relative frequency of each value of calls per minute. For example, if the collected data over time looks something like this: 2, 2, 3, 5, 4, 5, 5, 3, 6, 3, 4, … etc., this data is obtained by counting the number of calls received every minute. In order to compute the relative frequency of each value, you can count the number of occurrences of each value and divide by the total number of occurrences. This way you will end up with something like the gray curve in the figure below, which is equivalent to the histogram of the data in this example.

Image generated by the Author
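Those relative frequencies can be computed directly; the sketch below uses the example values listed above (a minimal illustration, not the article's full dataset):

# Sketch: relative frequency of each calls-per-minute value,
# using the example values listed above (illustrative)
from collections import Counter

calls_per_minute = [2, 2, 3, 5, 4, 5, 5, 3, 6, 3, 4]
total = len(calls_per_minute)
relative_freq = {value: count / total
                 for value, count in sorted(Counter(calls_per_minute).items())}
print(relative_freq)  # {2: 0.1818..., 3: 0.2727..., 4: 0.1818..., 5: 0.2727..., 6: 0.0909...}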

Another option is to assume that each data point in our data is a realization of a random variable (X) that follows a certain probability distribution. This probability distribution represents all the possible values that would be generated if we were to keep collecting this data long into the future, or in other words, we can say that it represents the population from which our sample data was collected. Furthermore, we can assume that all the data points come from the same probability distribution, i.e., the data points are identically distributed. Moreover, we assume that the data points are independent, i.e., the value of one data point in the sample is not affected by the values of the other data points. The independence and identical distribution (iid) assumption about the sample data points allows us to proceed mathematically with our statistical inference problem in a systematic and straightforward way. In more formal terms, we assume that a generative probabilistic model is responsible for generating the iid data, as shown below.

Image generated by the Author

In this particular example, a Poisson distribution with mean value λ = 5 is assumed to have generated the data, as shown in the blue curve in the figure below. In other words, we assume here that we know the true value of λ, which is usually not known and needs to be estimated from the data.

Image generated by the Author

In contrast to the earlier method, in which we had to compute the relative frequency of each value of calls per minute (e.g., 12 values to be estimated in this example, as shown in the gray figure above), now we only have one parameter to find, namely λ. Another advantage of this generative model approach is that it generalizes better from sample to population. The assumed probability distribution can be said to have summarized the data in an elegant way that follows Occam's razor principle.

Before proceeding further into how we aim to find this parameter λ, let's first show the Python code that was used to generate the above figure.

# Import the Python libraries that we will need in this article
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import math
from scipy import stats

# Color constants used in the plots (assumed values; the original code
# references BLUE2, GRAY9, and ORANGE1 without defining them)
BLUE2 = "#1f77b4"
GRAY9 = "#e6e6e6"
ORANGE1 = "#ff7f0e"

# Poisson distribution example
lambda_ = 5
sample_size = 1000
data_poisson = stats.poisson.rvs(lambda_, size=sample_size)  # generate data

# Plot the data histogram vs. the PMF
x1 = np.arange(data_poisson.min(), data_poisson.max() + 1, 1)  # include the max value
fig1, ax = plt.subplots()
plt.bar(x1, stats.poisson.pmf(x1, lambda_),
        label="Poisson distribution (PMF)", color=BLUE2, linewidth=3.0, width=0.3, zorder=2)
ax.hist(data_poisson, bins=x1.size, density=True, label="Data histogram",
        color=GRAY9, rwidth=1, zorder=1, align='left')

ax.set_title("Data histogram vs. Poisson true distribution", fontsize=14, loc="left")
ax.set_xlabel('Data value')
ax.set_ylabel('Probability')
ax.legend()
plt.savefig("Poisson_hist_PMF.png", format="png", dpi=800)

Our problem now is to estimate the value of the unknown parameter λ using the data we collected. This is where we use the method of moments (MoM) approach that appears in the title of this article.

First, we need to define what is meant by the moment of a random variable. Mathematically, the kth moment of a discrete random variable X is defined as follows:
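In standard notation,

E\left[X^k\right] = \sum_{x} x^k \, P(X = x)

where the sum runs over all possible values x of X; the first moment (k = 1) is the mean μ.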

Take the first moment E(X) as an example, which is also the mean μ of the random variable, and assume that we collect our data, modeled as N iid realizations of the random variable X. A reasonable estimate of μ is the sample mean, which is defined as follows:
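\hat{\mu} = \bar{X} = \frac{1}{N} \sum_{n=1}^{N} X_n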

Thus, in order to obtain a MoM estimate of a model parameter that parametrizes the probability distribution of the random variable X, we first write the unknown parameter as a function of one or more of the kth moments of the random variable, then we replace each kth moment with its sample estimate. The more unknown parameters we have in our model, the more moments we need.

In our Poisson model example, this is very simple, as shown below.
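Since the first moment of a Poisson random variable equals its rate parameter, matching it with the sample mean gives:

E[X] = \lambda \quad \Rightarrow \quad \hat{\lambda} = \bar{X} = \frac{1}{N} \sum_{n=1}^{N} X_n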

In the next part, we test our MoM estimator on the simulated data we generated earlier. The Python code for obtaining the estimator and plotting the corresponding probability distribution using the estimated parameter is shown below.

# Method of moments estimator using the data (Poisson dist.)
lambda_hat = sum(data_poisson) / len(data_poisson)  # sample mean

# Plot the MoM estimated PMF vs. the true PMF
x1 = np.arange(data_poisson.min(), data_poisson.max() + 1, 1)
fig2, ax = plt.subplots()
plt.bar(x1, stats.poisson.pmf(x1, lambda_hat),
        label="Estimated PMF", color=ORANGE1, linewidth=3.0, width=0.3)
plt.bar(x1 + 0.3, stats.poisson.pmf(x1, lambda_),
        label="True PMF", color=BLUE2, linewidth=3.0, width=0.3)

ax.set_title("Estimated Poisson distribution vs. true distribution", fontsize=14, loc="left")
ax.set_xlabel('Data value')
ax.set_ylabel('Probability')
ax.legend()
#ax.grid()
plt.savefig("Poisson_true_vs_est.png", format="png", dpi=800)

The figure below shows the estimated distribution versus the true distribution. The distributions are quite close, indicating that the MoM estimator is a reasonable estimator for our problem. In fact, replacing expectations with averages in the MoM estimator implies that the estimator is a consistent estimator by the law of large numbers, which is a good justification for using such an estimator.

Image generated by the Author
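As a quick sketch of that consistency (an illustrative addition; the sample sizes and seed below are assumptions, not the article's runs), re-estimating λ over growing sample sizes shows the MoM estimate settling around the true value of 5:

# Sketch: the MoM estimate of lambda settles near the true value as N grows,
# illustrating consistency via the law of large numbers (illustrative addition)
import numpy as np
from scipy import stats

true_lambda = 5
rng = np.random.default_rng(0)  # assumed seed, for reproducibility
for n in (10, 100, 1_000, 10_000, 100_000):
    sample = stats.poisson.rvs(true_lambda, size=n, random_state=rng)
    print(f"N = {n:>6}: lambda_hat = {sample.mean():.3f}")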

Another MoM estimation example is shown below, assuming the iid data is generated by a normal distribution with mean μ and variance σ².

Image generated by the Author

In this particular example, a Gaussian (normal) distribution with mean value μ = 10 and σ = 2 is assumed to have generated the data. The histogram of the generated data sample (sample size = 1000) is shown in gray in the figure below, while the true distribution is shown in the blue curve.

Image generated by the Author

The Python code that was used to generate the above figure is shown below.

# Normal distribution example
mu = 10
sigma = 2
sample_size = 1000
data_normal = stats.norm.rvs(loc=mu, scale=sigma, size=sample_size)  # generate data

# Plot the data histogram vs. the PDF
x2 = np.linspace(data_normal.min(), data_normal.max(), sample_size)
fig3, ax = plt.subplots()
ax.hist(data_normal, bins=50, density=True, label="Data histogram", color=GRAY9)
ax.plot(x2, stats.norm(loc=mu, scale=sigma).pdf(x2),
        label="Normal distribution (PDF)", color=BLUE2, linewidth=3.0)

ax.set_title("Data histogram vs. true distribution", fontsize=14, loc="left")
ax.set_xlabel('Data value')
ax.set_ylabel('Probability')
ax.legend()
ax.grid()

plt.savefig("Normal_hist_PMF.png", format="png", dpi=800)

Now, we want to use the MoM estimator to find an estimate of the model parameters, i.e., μ and σ², as shown below.
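In standard notation, matching the first two moments with their sample estimates gives:

E[X] = \mu \;\Rightarrow\; \hat{\mu} = \bar{X} = \frac{1}{N} \sum_{n=1}^{N} X_n

E\left[X^2\right] = \mu^2 + \sigma^2 \;\Rightarrow\; \hat{\sigma}^2 = \frac{1}{N} \sum_{n=1}^{N} X_n^2 - \bar{X}^2 = \frac{1}{N} \sum_{n=1}^{N} \left(X_n - \bar{X}\right)^2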

In order to test this estimator using our sample data, we plot the distribution with the estimated parameters (orange) in the figure below, versus the true distribution (blue). Again, the distributions are quite close. Of course, in order to quantify this estimator, we need to test it on multiple realizations of the data and observe properties such as bias, variance, etc. Such important aspects have been discussed in an earlier article.

Image generated by the Author

The Python code that was used to estimate the model parameters using MoM, and to plot the above figure, is shown below.

# Method of moments estimator using the data (Normal dist.)
mu_hat = sum(data_normal) / len(data_normal)  # MoM mean estimator
var_hat = sum(pow(x - mu_hat, 2) for x in data_normal) / len(data_normal)  # MoM variance estimator (divides by N)
sigma_hat = math.sqrt(var_hat)  # MoM standard deviation estimator

# Plot the MoM estimated PDF vs. the true PDF
x2 = np.linspace(data_normal.min(), data_normal.max(), sample_size)
fig4, ax = plt.subplots()
ax.plot(x2, stats.norm(loc=mu_hat, scale=sigma_hat).pdf(x2),
        label="Estimated PDF", color=ORANGE1, linewidth=3.0)
ax.plot(x2, stats.norm(loc=mu, scale=sigma).pdf(x2),
        label="True PDF", color=BLUE2, linewidth=3.0)

ax.set_title("Estimated Normal distribution vs. true distribution", fontsize=14, loc="left")
ax.set_xlabel('Data value')
ax.set_ylabel('Probability')
ax.legend()
ax.grid()
plt.savefig("Normal_true_vs_est.png", format="png", dpi=800)

Another useful probability distribution is the Gamma distribution. An example of the application of this distribution in real life was discussed in a previous article. However, in this article, we derive the MoM estimators of the Gamma distribution parameters α and β, as shown below, assuming the data is iid.

Image generated by the Author

In this particular example, a Gamma distribution with α = 6 and β = 0.5 is assumed to have generated the data. The histogram of the generated data sample (sample size = 1000) is shown in gray in the figure below, while the true distribution is shown in the blue curve.

Image generated by the Author

The Python code that was used to generate the above figure is shown below.

# Gamma distribution example
alpha_ = 6  # shape parameter
scale_ = 2  # scale parameter = 1/beta in the gamma dist.
sample_size = 1000
data_gamma = stats.gamma.rvs(alpha_, loc=0, scale=scale_, size=sample_size)  # generate data

# Plot the data histogram vs. the PDF
x3 = np.linspace(data_gamma.min(), data_gamma.max(), sample_size)
fig5, ax = plt.subplots()
ax.hist(data_gamma, bins=50, density=True, label="Data histogram", color=GRAY9)
ax.plot(x3, stats.gamma(alpha_, loc=0, scale=scale_).pdf(x3),
        label="Gamma distribution (PDF)", color=BLUE2, linewidth=3.0)

ax.set_title("Data histogram vs. true distribution", fontsize=14, loc="left")
ax.set_xlabel('Data value')
ax.set_ylabel('Probability')
ax.legend()
ax.grid()
plt.savefig("Gamma_hist_PMF.png", format="png", dpi=800)

Now, we want to use the MoM estimator to find an estimate of the model parameters, i.e., α and β, as shown below.
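With the rate parametrization used here (E[X] = α/β and Var(X) = α/β²), matching the first two moments with their sample estimates gives:

\hat{\alpha} = \frac{\bar{X}^2}{S^2}, \qquad \hat{\beta} = \frac{\bar{X}}{S^2}, \qquad \hat{\theta} = \frac{1}{\hat{\beta}} = \frac{S^2}{\bar{X}}

where S² denotes the sample variance and θ = 1/β is the scale parameter expected by scipy (scale_hat in the code below).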

In order to test this estimator using our sample data, we plot the distribution with the estimated parameters (orange) in the figure below, versus the true distribution (blue). Again, the distributions are quite close.

Image generated by the Author

The Python code that was used to estimate the model parameters using MoM, and to plot the above figure, is shown below.

# Method of moments estimator using the data (Gamma dist.)
sample_mean = data_gamma.mean()
sample_var = data_gamma.var()  # biased sample variance (divides by N)
scale_hat = sample_var / sample_mean  # scale is equal to 1/beta in the gamma dist.
alpha_hat = sample_mean**2 / sample_var

# Plot the MoM estimated PDF vs. the true PDF
x4 = np.linspace(data_gamma.min(), data_gamma.max(), sample_size)
fig6, ax = plt.subplots()

ax.plot(x4, stats.gamma(alpha_hat, loc=0, scale=scale_hat).pdf(x4),
        label="Estimated PDF", color=ORANGE1, linewidth=3.0)
ax.plot(x4, stats.gamma(alpha_, loc=0, scale=scale_).pdf(x4),
        label="True PDF", color=BLUE2, linewidth=3.0)

ax.set_title("Estimated Gamma distribution vs. true distribution", fontsize=14, loc="left")
ax.set_xlabel('Data value')
ax.set_ylabel('Probability')
ax.legend()
ax.grid()
plt.savefig("Gamma_true_vs_est.png", format="png", dpi=800)

Note that we used the following equivalent ways of writing the variance when deriving the estimators in the cases of the Gaussian and Gamma distributions.
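In standard notation, that identity is

\operatorname{Var}(X) = E\left[(X - \mu)^2\right] = E\left[X^2\right] - \left(E[X]\right)^2

with the sample analogue S^2 = \frac{1}{N} \sum_{n=1}^{N} X_n^2 - \bar{X}^2.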

Conclusion

In this article, we explored various examples of the method of moments estimator and its applications in different problems in data science. Moreover, detailed Python code that was used to implement the estimators from scratch, as well as to plot the different figures, is also shown. I hope you will find this article helpful.


Tags: Code, Estimation, Method, Moments, Python
