
Using NumPy to Analyze My Daily Habits (Sleep, Screen Time & Mood)

By Admin | October 28, 2025 | Artificial Intelligence
This post is part of a small NumPy project series where I try to actually build something with NumPy instead of just going through random functions and documentation. I've always felt that the best way to learn is by doing, so for this project, I wanted to create something both practical and personal.

The idea was simple: analyze my daily habits (sleep, study hours, screen time, exercise, and mood) and see how they affect my productivity and general well-being. The data isn't real; it's fictional, simulated over 30 days. But the goal isn't the accuracy of the data; it's learning how to use NumPy meaningfully.

So let's walk through the process step by step.

Step 1 — Loading and Understanding the Data

I started by creating a simple NumPy array containing 30 rows (one for each day) and 6 columns, each column representing a different habit metric. Then I saved it as a .npy file so I could easily load it later.
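As a rough sketch of how such a file could be generated (this is my reconstruction, not code from the original post; the column layout of day number, sleep hours, study hours, screen time, exercise minutes, and mood score is inferred from the outputs below):

# Hypothetical data simulation (assumed, for illustration only)
import numpy as np

rng = np.random.default_rng(42)                  # fixed seed so the fictional data is reproducible
days = np.arange(1, 31)                          # day numbers 1..30
sleep = rng.uniform(5.5, 8.5, 30).round(1)       # sleep hours
study = rng.integers(2, 8, 30).astype(float)     # study hours
screen = rng.uniform(2.5, 7.0, 30).round(1)      # screen time hours
exercise = 5.0 * rng.integers(0, 11, 30)         # exercise minutes (multiples of 5)
mood = rng.integers(5, 9, 30).astype(float)      # mood score on a 1-10 scale

data = np.column_stack([days, sleep, study, screen, exercise, mood])
np.save('activity_data.npy', data)               # saved for loading in the next step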

# TODO: Import NumPy and load the .npy data file
import numpy as np
data = np.load('activity_data.npy')

Once loaded, I wanted to verify that everything looked as expected. So I checked the shape (to know how many rows and columns there were) and the number of dimensions (to confirm it's a 2D table, not a 1D list).

# TODO: Print array shape, number of dimensions, etc.
data.shape
data.ndim

OUTPUT: 30 rows, 6 columns, and ndim=2

I also printed out the first few rows just to visually confirm that every value looked fine: for instance, that sleep hours weren't negative and that the mood values were within a reasonable range.

# TODO: First 5 rows
data[:5]

Output:

array([[ 1. ,  6.5,  5. ,  4.2, 20. ,  6. ],
       [ 2. ,  7.2,  6. ,  3.1, 35. ,  7. ],
       [ 3. ,  5.8,  4. ,  5.5,  0. ,  5. ],
       [ 4. ,  8. ,  7. ,  2.5, 30. ,  8. ],
       [ 5. ,  6. ,  5. ,  4.8, 10. ,  6. ]])

Step 2 — Validating the Data

Before doing any analysis, I wanted to make sure the data made sense. It's something we often skip when working with fictional data, but it's still good practice.

So I checked:

  • No negative sleep hours
  • No mood scores less than 1 or greater than 10

For sleep, that meant selecting the sleep column (index 1 in my array) and checking whether any values were below zero.

# Make sure values are reasonable (no negative sleep)
data[:, 1] < 0

Output:

array([False, False, False, False, False, False, False, False, False,
       False, False, False, False, False, False, False, False, False,
       False, False, False, False, False, False, False, False, False,
       False, False, False])

This means there were no negatives. Then I did the same for mood. I counted across to find that the mood column was at index 5, and checked whether any values were below 1 or above 10.

# Is mood out of range?
data[:, 5] < 1
data[:, 5] > 10

We got the same all-False output for both checks.

Everything looked good, so we could move on.
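As a side note, a more compact way to run the same checks (my own variation, not from the original walkthrough) is to collapse each boolean mask with np.any() and assert on the result:

# Collapse each boolean mask into a single pass/fail answer
assert not np.any(data[:, 1] < 0), 'negative sleep hours found'
assert not np.any((data[:, 5] < 1) | (data[:, 5] > 10)), 'mood score out of the 1-10 range'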

Step 3 — Splitting the Data into Weeks

I had 30 days of data, and I wanted to analyze it week by week. My first instinct was to use NumPy's split() function, but that failed because 30 isn't evenly divisible by 4. So instead, I used np.array_split(), which allows uneven splits.

That gave me:

  • Week 1 → 8 days
  • Week 2 → 8 days
  • Week 3 → 7 days
  • Week 4 → 7 days
# TODO: Slice data into week 1, week 2, week 3, week 4
weekly_data = np.array_split(data, 4)
weekly_data

Output:

[array([[ 1. ,  6.5,  5. ,  4.2, 20. ,  6. ],
        [ 2. ,  7.2,  6. ,  3.1, 35. ,  7. ],
        [ 3. ,  5.8,  4. ,  5.5,  0. ,  5. ],
        [ 4. ,  8. ,  7. ,  2.5, 30. ,  8. ],
        [ 5. ,  6. ,  5. ,  4.8, 10. ,  6. ],
        [ 6. ,  7.5,  6. ,  3.3, 25. ,  7. ],
        [ 7. ,  8.2,  3. ,  6.1, 40. ,  7. ],
        [ 8. ,  6.3,  4. ,  5. , 15. ,  6. ]]),

 array([[ 9. ,  7. ,  6. ,  3.2, 30. ,  7. ],
        [10. ,  5.5,  3. ,  6.8,  0. ,  5. ],
        [11. ,  7.8,  7. ,  2.9, 25. ,  8. ],
        [12. ,  6.1,  5. ,  4.5, 15. ,  6. ],
        [13. ,  7.4,  6. ,  3.7, 30. ,  7. ],
        [14. ,  8.1,  2. ,  6.5, 50. ,  7. ],
        [15. ,  6.6,  5. ,  4.1, 20. ,  6. ],
        [16. ,  7.3,  6. ,  3.4, 35. ,  7. ]]),

 array([[17. ,  5.9,  4. ,  5.6,  5. ,  5. ],
        [18. ,  8.3,  7. ,  2.6, 30. ,  8. ],
        [19. ,  6.2,  5. ,  4.3, 10. ,  6. ],
        [20. ,  7.6,  6. ,  3.1, 25. ,  7. ],
        [21. ,  8.4,  3. ,  6.3, 40. ,  7. ],
        [22. ,  6.4,  4. ,  5.1, 15. ,  6. ],
        [23. ,  7.1,  6. ,  3.3, 30. ,  7. ]]),

 array([[24. ,  5.7,  3. ,  6.7,  0. ,  5. ],
        [25. ,  7.9,  7. ,  2.8, 25. ,  8. ],
        [26. ,  6.2,  5. ,  4.4, 15. ,  6. ],
        [27. ,  7.5,  6. ,  3.5, 30. ,  7. ],
        [28. ,  8. ,  2. ,  6.4, 50. ,  7. ],
        [29. ,  6.5,  5. ,  4.2, 20. ,  6. ],
        [30. ,  7.4,  6. ,  3.6, 35. ,  7. ]])]

Now the data was in four chunks, and I could easily analyze each one individually.

Step 4 — Calculating Weekly Metrics

I wanted to get a sense of how each habit changed from week to week. So I focused on four main things:

  • Average sleep
  • Average study hours
  • Average screen time
  • Average mood score

I stored each week's array in a separate variable, then used np.mean() to calculate the averages for each metric.

Average sleep hours

# store into variables
week_1 = weekly_data[0]
week_2 = weekly_data[1]
week_3 = weekly_data[2]
week_4 = weekly_data[3]

# TODO: Compute average sleep
week1_avg_sleep = np.mean(week_1[:, 1])
week2_avg_sleep = np.mean(week_2[:, 1])
week3_avg_sleep = np.mean(week_3[:, 1])
week4_avg_sleep = np.mean(week_4[:, 1])

Average study hours

# TODO: Compute average study hours
week1_avg_study = np.mean(week_1[:, 2])
week2_avg_study = np.mean(week_2[:, 2])
week3_avg_study = np.mean(week_3[:, 2])
week4_avg_study = np.mean(week_4[:, 2])

Average screen time

# TODO: Compute average screen time
week1_avg_screen = np.mean(week_1[:, 3])
week2_avg_screen = np.mean(week_2[:, 3])
week3_avg_screen = np.mean(week_3[:, 3])
week4_avg_screen = np.mean(week_4[:, 3])

Average mood score

# TODO: Compute average mood score
week1_avg_mood = np.mean(week_1[:, 5])
week2_avg_mood = np.mean(week_2[:, 5])
week3_avg_mood = np.mean(week_3[:, 5])
week4_avg_mood = np.mean(week_4[:, 5])

Then, to make everything easier to read, I formatted the results nicely.

# TODO: Display weekly results clearly
print(f"Week 1 — Average sleep: {week1_avg_sleep:.2f} hrs, Study: {week1_avg_study:.2f} hrs, "
      f"Screen time: {week1_avg_screen:.2f} hrs, Mood score: {week1_avg_mood:.2f}")

print(f"Week 2 — Average sleep: {week2_avg_sleep:.2f} hrs, Study: {week2_avg_study:.2f} hrs, "
      f"Screen time: {week2_avg_screen:.2f} hrs, Mood score: {week2_avg_mood:.2f}")

print(f"Week 3 — Average sleep: {week3_avg_sleep:.2f} hrs, Study: {week3_avg_study:.2f} hrs, "
      f"Screen time: {week3_avg_screen:.2f} hrs, Mood score: {week3_avg_mood:.2f}")

print(f"Week 4 — Average sleep: {week4_avg_sleep:.2f} hrs, Study: {week4_avg_study:.2f} hrs, "
      f"Screen time: {week4_avg_screen:.2f} hrs, Mood score: {week4_avg_mood:.2f}")

Output:

Week 1 — Average sleep: 6.94 hrs, Study: 5.00 hrs, Screen time: 4.31 hrs, Mood score: 6.50
Week 2 — Average sleep: 6.97 hrs, Study: 5.00 hrs, Screen time: 4.39 hrs, Mood score: 6.62
Week 3 — Average sleep: 7.13 hrs, Study: 5.00 hrs, Screen time: 4.33 hrs, Mood score: 6.57
Week 4 — Average sleep: 7.03 hrs, Study: 4.86 hrs, Screen time: 4.51 hrs, Mood score: 6.57
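As an aside, the per-week repetition above can be collapsed into a single loop over the chunks; here is a sketch of that variation (mine, not from the original walkthrough):

# Column indices: 1 = sleep, 2 = study, 3 = screen time, 5 = mood
for i, week in enumerate(weekly_data, start=1):
    avg = week.mean(axis=0)  # column-wise averages for this week
    print(f"Week {i} — Average sleep: {avg[1]:.2f} hrs, Study: {avg[2]:.2f} hrs, "
          f"Screen time: {avg[3]:.2f} hrs, Mood score: {avg[5]:.2f}")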

Step 5 — Making Sense of the Results

Once I printed out the numbers, some patterns started to show up.

My sleep hours were fairly steady for the first two weeks (around 6.9 hours), but in week three they jumped to around 7.1 hours. That means I was "sleeping better" as the month went on. By week four, they settled at roughly 7.0 hours.

For study hours, it was the opposite. Weeks one and two averaged around 5 hours per day, but by week four the average had dipped to about 4.9 hours. Basically, I started off strong and slowly lost momentum, which, honestly, sounds about right.

Then came screen time. This one hurt a bit. In week one it was roughly 4.3 hours per day, and it crept upward over the month, ending around 4.5 hours. The classic cycle of being productive early on, then slowly drifting into more "scrolling breaks" later in the month.

Finally, there was mood. My mood score started at around 6.5 in week one, rose slightly to 6.6 in week two, and then more or less hovered there for the rest of the period. It didn't move dramatically, but it was interesting to see a small bump in week two, right before my study hours dropped and my screen time increased.

To make the trends easier to see at a glance, I plotted the weekly averages with matplotlib.

[Figure: Weekly Habit Trends Over 30 Days]
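The original post only shows the finished chart; a minimal plotting sketch that would produce something similar (assuming the weekly averages computed in Step 4) might look like this:

import matplotlib.pyplot as plt

weeks = [1, 2, 3, 4]
plt.plot(weeks, [week1_avg_sleep, week2_avg_sleep, week3_avg_sleep, week4_avg_sleep],
         marker='o', label='Sleep (hrs)')
plt.plot(weeks, [week1_avg_study, week2_avg_study, week3_avg_study, week4_avg_study],
         marker='o', label='Study (hrs)')
plt.plot(weeks, [week1_avg_screen, week2_avg_screen, week3_avg_screen, week4_avg_screen],
         marker='o', label='Screen time (hrs)')
plt.plot(weeks, [week1_avg_mood, week2_avg_mood, week3_avg_mood, week4_avg_mood],
         marker='o', label='Mood score')
plt.xticks(weeks)
plt.xlabel('Week')
plt.title('Weekly Habit Trends Over 30 Days')
plt.legend()
plt.show()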

Step 6 — Looking for Patterns

Now that I had the numbers, I wanted to understand why my mood went up in week two.

So I compared the weeks side by side (see the summary sketch below). Week two had decent sleep, high study hours, and relatively low screen time compared to the later weeks.

That might explain why my mood score peaked there. By week three, even though I slept more, my study hours had started to dip; maybe I was resting more but getting less done, which didn't boost my mood as much as I expected.
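For that side-by-side comparison, one compact way to assemble it (my own sketch, reusing the weekly_data chunks from Step 3) is to stack the per-week averages into a single summary array:

# Rows = weeks 1-4; columns follow the original layout (day, sleep, study, screen, exercise, mood)
weekly_summary = np.vstack([week.mean(axis=0) for week in weekly_data])
print(weekly_summary[:, [1, 2, 3, 5]])  # sleep, study, screen time, and mood side by side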

That's what I liked about this project: it's not about the data being real, but about how you can use NumPy to explore patterns, relationships, and small insights. Even fictional data can tell a story when you look at it the right way.

Step 7 — Wrapping Up and Next Steps

In this little project, I learned a few key things, both about NumPy and about structuring an analysis like this.

We started with a raw array of fictional daily habits, learned how to check its structure and validity, split it into meaningful chunks (weeks), and then used simple NumPy operations to analyze each segment.

It's the kind of small project that reminds you that data analysis doesn't always have to be complex. Sometimes it's just about asking simple questions like "How is my screen time changing over time?" or "When do I feel my best?"

If I wanted to take this further (and I probably will), there are plenty of directions to go:

  • Find the best and worst days overall (a quick preview sketch follows this list)
  • Compare weekdays vs. weekends
  • Or even create a simple "wellbeing score" based on several habits combined
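As a tiny preview of the first idea, and under the assumption that the mood column is a fair proxy for how good a day was, np.argmax() and np.argmin() would do the job:

# Hypothetical preview: best/worst day by mood score (column 5), reading the day number from column 0
best_day = int(data[np.argmax(data[:, 5]), 0])
worst_day = int(data[np.argmin(data[:, 5]), 0])
print(f"Best day: {best_day}, worst day: {worst_day}")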

But that'll probably be for the next part of the series.

For now, I'm glad I got to apply NumPy to something that feels real and relatable: not just abstract arrays and numbers, but habits and emotions. That's the kind of learning that sticks.

Thanks for reading.

If you're following along with the series, try recreating this with your own fictional data. Even if your numbers are random, the process will teach you how to slice, split, and analyze arrays like a pro.

Tags: Analyze, Daily Habits, Mood, NumPy, Screen time, Sleep
