Last year, I was scrolling through my Zerodha app when something clicked in my head. The app was recommending stocks based on my watch history. Not because some analyst sat down and coded rules for every possible scenario — but because it had *learned* from patterns in thousands of users' behaviour.
That's when it hit me: Machine Learning isn't this mystical thing reserved for PhD students at IIT. It's literally everywhere. Your Netflix suggestions. CRED's cashback predictions. PhonePe's fraud detection. Even your bank's decision to approve or reject your loan application.
But here's the problem — most people (including me, for a while) use these products without actually understanding *how* they work. We treat it like magic. And when you don't understand something, you either fear it or blindly trust it. Neither is great.
So I spent the last few months actually digging into this. Not the heavy math — the *logic*. And honestly? It's simpler than you think. Let me break it down the way I wish someone had explained it to me.
Machine Learning Isn't Actually Learning (And That's the Point)
Here's the first thing to get straight: ML isn't "learning" in the way your brain learns. Your brain learns concepts, builds intuition, makes creative connections. A machine learning model? It's finding patterns in data. Pure pattern recognition.
Think of it this way. I have a friend who can identify if a mango is ripe just by touch and smell. That's intuition. But if I gave you a spreadsheet with 10,000 rows of mangoes — their weight, firmness rating, smell score, sugar content, and whether they were ripe or not — you could *eventually* figure out the formula: "If firmness is less than 5 AND sugar content is above 12%, it's ripe."
That's literally what ML does. Except instead of mangoes, it's processing millions of data points. And instead of you manually writing the formula, the algorithm figures out the pattern automatically.
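To make that concrete, here's a tiny pure-Python sketch of the mango example. Every number in it is made up for illustration, and real algorithms are far more sophisticated — but the core move is the same: instead of a human writing the rule, the code searches the labelled data for the rule that fits best.

```python
# Toy labelled rows: (firmness rating 1-10, was it ripe?). Numbers invented.
mangoes = [(2, True), (3, True), (4, True), (6, False), (7, False), (8, False)]

def learn_firmness_threshold(rows):
    """Try every candidate threshold and keep the one that classifies the
    labelled examples best. Crude -- but it IS pattern-finding from data."""
    best_t, best_correct = None, -1
    for t in range(1, 11):
        correct = sum((firmness < t) == ripe for firmness, ripe in rows)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

threshold = learn_firmness_threshold(mangoes)
print(f"learned rule: ripe if firmness < {threshold}")  # the data implies 5
```

Nobody typed "5" into the program. The data did. That's the whole shift in mindset.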
The Old Way vs. The ML Way
Before ML became mainstream, programmers would write explicit rules. Your bank's loan approval system would have been coded like:
"If salary > 5,00,000 AND employment years > 3 AND existing debt < 2,00,000, approve the loan."
Sounds logical, right? But it's brittle. What if someone earns ₹4,50,000 but has a perfect repayment history? The rule says no, but they're actually a safe bet.
With ML, you don't write rules. You feed the system examples of past loan applications — who got approved, who defaulted, who paid on time. The algorithm learns: "Hey, there's a pattern here. Repayment history matters more than raw salary. And someone's age combined with their job stability is a better predictor than either alone."
It's flexible. It adapts. And it gets better with more data.
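Here's the difference in code form. The first function is the brittle rule from above; the second is a sketch of the ML way — a weighted score whose weights would normally be *learned* from past repayment outcomes. I've hand-picked the weights and thresholds purely to show the shape, so treat them as placeholders, not a real credit model.

```python
# The old way: hard-coded thresholds (all numbers invented for illustration).
def approve_by_rules(salary, years_employed, debt):
    return salary > 500_000 and years_employed > 3 and debt < 200_000

# The ML way, sketched: a score combining signals. In a real system these
# weights come from training on past loans; here they're hand-picked.
def approve_by_score(salary, years_employed, repayment_history):
    score = (0.3 * min(salary / 500_000, 1.0)
             + 0.2 * min(years_employed / 3, 1.0)
             + 0.5 * repayment_history)  # history weighted most
    return score >= 0.8

# The applicant the rigid rule wrongly rejects:
# ₹4,50,000 salary, 5 years employed, perfect repayment history.
print(approve_by_rules(450_000, 5, 100_000))                # False -- brittle
print(approve_by_score(450_000, 5, repayment_history=1.0))  # True  -- flexible
```

Same applicant, two verdicts. The scored version degrades gracefully on borderline cases instead of falling off a cliff at ₹5,00,000.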
How It Actually Works (No Math, I Promise)
Let me walk you through a real scenario. Let's say you're building an app that predicts if a cricket match will be a high-scoring or low-scoring one.
Step 1: Gather Historical Data
First, you collect data from past matches. For each match, you note:
- Ground (Wankhede, Arun Jaitley, etc.)
- Time of day (day vs. night)
- Weather (humidity, temperature)
- Pitch condition (grass coverage, hardness)
- Teams playing
- Total runs scored
You gather this for 5,000 past matches.
Step 2: Train the Model
Here's where the "learning" happens. You split your 5,000 matches into two groups:
- Training set (4,000 matches): The algorithm studies these. It finds patterns. "When humidity is high at Wankhede in evening matches, scores tend to be lower. When pitch has grass coverage..."
- Test set (1,000 matches): You don't show these to the algorithm while it's learning. These are hidden until the end.
During training, the algorithm makes guesses. "Based on humidity and pitch, I predict 160+ runs." Then it checks the actual result. If it got it right, great. If not, it adjusts its thinking slightly. It does this thousands of times, each time getting a *tiny bit* better.
This is called iteration. It's not rocket science — it's just repeated adjustment.
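You can see the whole train/test loop in about twenty lines. This sketch invents its own synthetic "matches" (one feature, humidity, with some noise) rather than using real cricket data, and the "algorithm" is just a threshold nudged on every mistake — a deliberately simple stand-in for what real models do at scale:

```python
import random
random.seed(42)

# Synthetic stand-in for 5,000 past matches: (humidity %, high-scoring?).
# In this invented world, humidity below 60% usually means a high-scoring
# game, with 10% noise so it's not a clean rule.
matches = []
for _ in range(5000):
    humidity = random.uniform(30, 90)
    high_scoring = (humidity < 60) if random.random() < 0.9 else (humidity >= 60)
    matches.append((humidity, high_scoring))

train, test = matches[:4000], matches[4000:]  # the 4,000 / 1,000 split

# "Training": start with a guess, and nudge the threshold a tiny bit every
# time a prediction is wrong -- thousands of small adjustments, i.e. iteration.
threshold = 45.0
for humidity, high_scoring in train:
    predicted_high = humidity < threshold
    if predicted_high != high_scoring:
        threshold += 0.1 if high_scoring else -0.1

# "Testing": accuracy on the 1,000 matches the loop never saw.
correct = sum((humidity < threshold) == high_scoring
              for humidity, high_scoring in test)
print(f"learned threshold ~ {threshold:.1f}, test accuracy = {correct / 1000:.0%}")
```

The threshold drifts toward the true cutoff (60%) on its own, and accuracy lands near the 90% ceiling the noise allows. No formula was ever written down.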
Step 3: Test and Evaluate
Once trained, you show it the 1,000 hidden test matches and ask: "Predict the score." If it gets 800 out of 1,000 right, your accuracy is 80%. That's actually pretty solid.
If it's terrible (like 45% accuracy), you either feed it more data, adjust how the algorithm works, or add more useful information (like player form, opposition strength, etc.).
Step 4: Deploy and Keep Learning
Once satisfied, you ship it. And here's the cool part — the model doesn't stop learning. Every time a new match happens, that data can be fed back in. The model gets continuously better.
This is why Netflix recommendations improve the more you watch. Why your UPI fraud detection gets sharper every month. The model is always learning from fresh data.
The Three Flavours of Machine Learning (And When They're Actually Used)
Now, ML has different flavours depending on what problem you're solving. Let me break down the main three that you've probably encountered without realizing it.
Supervised Learning (The Most Common)
You give the algorithm labeled examples. "Here's a photo of a ripe mango" (label: ripe). "Here's an unripe one" (label: unripe). The algorithm learns the difference.
Real-world uses:
- Email spam detection (this email is spam, that one isn't)
- Your bank predicting whether you'll repay a loan (trained on past approvals, rejections, and defaults)
- Google Photos tagging your friend's face (trained on millions of labeled faces)
And honestly? This is 90% of what's deployed in the real world. Especially in Indian fintech apps.
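Here's supervised learning at its absolute simplest — a toy spam filter. The training emails and the scoring rule (count which learned words a message hits) are invented for illustration; real filters use millions of labelled emails and proper probabilistic models, but the "labelled examples in, learned distinction out" pattern is identical:

```python
# Hand-labelled training examples -- the labels ARE the supervision.
training = [
    ("win cash prize now", True),
    ("lowest loan rates guaranteed", True),
    ("claim your free prize", True),
    ("meeting moved to monday", False),
    ("invoice for march attached", False),
    ("lunch plans for friday", False),
]

# "Learn" which words tend to appear in spam vs. normal mail.
spam_words, ham_words = set(), set()
for text, is_spam in training:
    (spam_words if is_spam else ham_words).update(text.split())

def looks_like_spam(text):
    words = text.split()
    spam_hits = sum(w in spam_words for w in words)
    ham_hits = sum(w in ham_words for w in words)
    return spam_hits > ham_hits

print(looks_like_spam("free cash prize"))         # True  -- leans spam
print(looks_like_spam("monday meeting invoice"))  # False -- leans normal
```

Notice nobody wrote "prize means spam". The labelled examples did.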
Unsupervised Learning (The Detective)
Here, you DON'T give labels. You just throw data at the algorithm and ask: "Find patterns. Group similar things together."
Your Spotify "Discover Weekly" playlist? Partly unsupervised learning. The algorithm groups people with similar listening habits, even though nobody explicitly labeled what's "similar."
Banks use this to detect fraud. They find clusters of unusual behaviour that don't match normal patterns. No one had to tell the system "this is fraud" — it figured it out by finding outliers.
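A minimal version of that "find the outlier with no labels" idea — this uses a simple standard-deviation check rather than real clustering, and the transaction amounts are invented, but it shows the key point: nothing in the data is tagged as fraud, yet the odd one out surfaces anyway.

```python
import statistics

# Unlabelled transaction amounts in rupees -- nobody tagged any as fraud.
# Values invented for illustration.
amounts = [420, 510, 380, 465, 490, 405, 455, 98_000, 470, 440]

mean = statistics.mean(amounts)
stdev = statistics.stdev(amounts)

# Flag anything more than 2 standard deviations from typical behaviour.
# No labels needed: the anomaly announces itself by not fitting the pattern.
outliers = [a for a in amounts if abs(a - mean) > 2 * stdev]
print(outliers)  # the ₹98,000 transaction stands out
```

Real fraud systems cluster across dozens of signals (amount, time, location, device), but "doesn't match normal behaviour" is the same test.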
Reinforcement Learning (The Video Game)
The algorithm learns by trial and error, getting rewards or penalties for actions. Like training a dog with treats.
This is less common in apps you use daily, but it powers recommendation engines that optimize for engagement (more views = reward). And it's the tech behind AlphaGo (the AI that beat Lee Sedol at Go).
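Trial and error with rewards fits in a few lines too. This is a toy "two-armed bandit" — imagine a recommender choosing between two thumbnails, where one genuinely gets clicked more often. The click rates are invented; the learning rule (mostly pick the best-looking option, occasionally explore, update your estimate after each reward) is the real core of the technique:

```python
import random
random.seed(7)

# True click rates per thumbnail -- hidden from the algorithm. Invented.
click_rate = {"A": 0.3, "B": 0.7}

value = {"A": 0.0, "B": 0.0}  # the algorithm's running estimate per arm
pulls = {"A": 0, "B": 0}

for _ in range(2000):
    # 10% of the time explore at random; otherwise exploit the current best.
    if random.random() < 0.1:
        arm = random.choice(["A", "B"])
    else:
        arm = max(value, key=value.get)
    reward = 1 if random.random() < click_rate[arm] else 0  # treat, or no treat
    pulls[arm] += 1
    value[arm] += (reward - value[arm]) / pulls[arm]        # update estimate

print(value)  # estimates drift toward the true click rates
print("favourite:", max(value, key=value.get))
```

It was never told B is better. It found out by trying, failing, and adjusting — the dog-and-treats loop, in code.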
| Type | What You Do | Real Example |
|---|---|---|
| Supervised | Label examples (right/wrong) | Loan approval, spam detection |
| Unsupervised | No labels — find patterns | Customer segmentation, fraud detection |
| Reinforcement | Trial and error with rewards | Game AI, optimization |
Where ML Is Changing India (And Where It's Not Yet)
Here's what honestly surprised me while researching this — ML adoption in India is weirdly uneven. Some sectors are light-years ahead. Others are still stuck in 1995.
Where it's thriving: Fintech is absolutely crushing it. Every major app — Zerodha, CRED, Paytm, PhonePe — uses ML for fraud detection, credit scoring, and personalization. Insurance companies are using it to predict claims. E-commerce platforms (Flipkart, Amazon India) use it for recommendations and inventory optimization.
Where it's barely started: Agriculture, healthcare, education. These sectors have massive data available and could benefit hugely, but adoption is slow. Partly infrastructure, partly because ML requires good data quality and these sectors often have messy, incomplete records.
And here's the thing I want you to understand — ML isn't a magic wand that solves every problem. It's a tool. A really powerful one, but a tool nonetheless. It works best when:
- You have tons of historical data
- Patterns exist in that data (not random)
- You can clearly define what "success" means
- The problem doesn't require human judgment or creativity
That's why it's brilliant at fraud detection. And mediocre at predicting the next big stock breakout (too many variables, humans interfere too much).
The Realistic Risks and Limitations
And honestly? This is where I get a bit frustrated with how tech companies market ML. They make it sound like this perfect oracle. It's not.
Models have bias. If you train a model on historical loan data where banks systematically rejected women applicants, the model will learn that bias. It'll be racist, sexist, casteist — whatever your training data was. This is a real problem in India where historical data reflects deep structural inequalities.
They need constant updates. A model trained in 2022 might perform terribly in 2024 if the world changed. (See: ML models during COVID vs. after. They just broke because patterns shifted.)
They're black boxes sometimes. You feed in features, get a prediction, but the "why" is unclear. This is fine for Netflix recommendations. Less fine for loan decisions that affect someone's life.
They fail on new situations. A model trained to identify spam email will struggle when spammers evolve tactics. It's seen *patterns in history*, not logic that adapts to the future.
Final Thoughts
Here's what I want you to take away from this.
Machine Learning isn't magic. It's not scary or mystical. It's a practical tool that finds patterns in data and uses those patterns to make predictions or decisions. It's powerful because data is everywhere, and patterns emerge from that data faster than humans could ever find them manually.
But it's not perfect. It's only as good as the data fed into it. It can be biased. It breaks when the world changes. And it requires constant maintenance and updates.
As someone who works with data and fintech daily, my honest take is this: ML is going to be a core skill — not just for engineers, but for anyone in tech, finance, or business. Not because you need to build models, but because you need to *understand* them. To know when to trust them. When to question them. How to use them responsibly.
And the fact that you're reading this and trying to understand it? That already puts you ahead of most people. Most folks will just use these apps and never think about how they work.
You now know better. You know it's patterns. You know it's iterative. You know it's not perfect but often pretty good. And you know that understanding how the tools shaping your financial life work is worth the time investment.
That's a solid foundation. From here, if you want to go deeper — explore how to build models, dive into specific algorithms, understand the math — you can. But you don't need to. Understanding the logic is enough to be dangerous in a good way.
Curious what problems at your job could be solved with ML? Or how your favourite app is using it? Think about it. The mental model you now have — it applies everywhere.
Written by Dattatray Dagale • 11 May 2026