
When AI Meets the Cosmos: Special Agents of Meaning — Stellas

Long before algorithms could recognize text or predict outcomes, humans learned to recognize something far more abstract — patterns in the cosmos.

We looked up at the night sky and began connecting stars into constellations, linking the movements of the sun and moon to seasons, and weaving meaning into cycles we could neither control nor fully explain. That was the birth of astrology — not as prophecy, but as humanity’s earliest science of pattern recognition.

It wasn’t about fortune-telling; it was about observation.
Astrology studied relationships — between celestial rhythms and emotional states, between time and transformation. It became a mirror for how humans think: by connecting dots until meaning emerges.


The Oldest System of Intelligence

If you strip away the symbolism, astrology at its core is a data system.
Each birth chart is a structured dataset — coordinates of planets, angles, and timing — interpreted through a framework built on observation, correlation, and meaning.

Thousands of years before “machine learning,” humans were already learning from machines of a different kind — the cosmic kind. The sky was our database; intuition, our first neural net.


A Meeting of Two Worlds

Fast-forward to today.
We now build systems that also recognize patterns — artificial intelligence models that read, reason, and infer.

But while AI seeks precision, astrology seeks perspective. One calculates; the other contemplates.

What if the two could meet?
Could AI reinterpret astrology — not as superstition, but as symbolic data that reveals correlations between our decisions, emotions, and timing?

That question led to the creation of Stellas — a human-augmented AI cosmos.


✨ Introducing Stellas: The Human-Augmented AI Cosmos

Stellas isn’t a horoscope app. It’s a living experiment — a fusion of AI reasoning and ancient symbolism, designed to uncover how cosmic data can reflect life’s rhythms.

Multiple AI agents work together — one analyzes career patterns, another explores emotional cycles, and another studies timing and planetary movements. Together, they simulate what used to take hours of manual analysis, offering deep, data-driven interpretations in seconds.

If you’ve ever asked questions like “Am I on the right path?” or “Is this the right time to make a change?”, Stellas gives you an immediate perspective. You enter your birth details, and the AI interprets — not your fate, but your context.


🌙 What You’ll Get in the Stellas Android App

You can now download the Stellas Android app to experience personalized AI-cosmic insights across every area of life:

💼 Career Timing — 12 power windows, salary timing & job switch dates
❤️ Love & Relationships — Attraction peaks, compatibility & deep conversations
💚 Health & Wellness — Energy cycles, workout windows & recovery periods
📅 Daily Guidance — Personalized insights for each day of the year
🌙 Moon Phases — Lunar timing for manifestation and release
Transit Reports — Major planetary movements affecting your chart
💬 Ask Anything — Get instant cosmic answers to your questions

Every feature is designed to merge AI precision with human reflection — not to dictate outcomes, but to reveal possibilities.


It’s About Perspective, Not Prediction

No AI, scientist, or astrologer can promise certainty.
That’s not what Stellas aims for. Its goal is awareness.

Think of it like a weather forecast for life.
If rain is predicted, you carry an umbrella. If it doesn’t rain, you still have the awareness to adapt. Similarly, Stellas helps you anticipate emotional or professional “climates” — not by dictating choices, but by illuminating them.

It’s a reminder that self-awareness is not an algorithmic output — it’s a conscious act of reflection. The insights may not always be exact, but they always offer perspective — and that’s where transformation begins.


The Deeper Layer of Intelligence

When AI starts interpreting symbolic systems like astrology, it moves beyond calculation into something profoundly human — interpretation.

It begins to see how we construct meaning, how we link logic with emotion, and how we turn patterns into stories. In doing so, it mirrors the very process that defines intelligence — the search for significance amid chaos.


A New Kind of Curiosity

Stellas represents this convergence — of logic and language, reason and rhythm.
It’s not about belief; it’s about exploration. It doesn’t replace human intuition; it augments it.

Because the true evolution of AI may not lie in replacing human reasoning — but in expanding it.
When machines begin to study patterns as humans once did, they help us rediscover something timeless: the art of seeing.


My Motivation for Building Stellas

At its heart, Stellas is born from creativity — the desire to explore what happens when technology and imagination collide.

The journey serves two purposes:
1️⃣ To see if today’s copilots and GenAI tools can truly deliver an end-to-end project. (More on this in a later post — because you only realize the gaps when you build something real.)
2️⃣ To take a domain often seen as mystical — astrology — and uncover hidden patterns using the latest in AI.

Whether you believe in astrology or not, there’s something here for everyone to explore.


You can explore Stellas today at stellas.me/promo — no sign-up, no name, just your birth details.
In under a minute, three specialized AI agents will reveal insights about you, your life, and your next milestone.

You might find resonance. You might find surprise.
But you’ll definitely find perspective — a quiet moment of reflection in a world that rarely pauses.


🎧 You can also listen to the latest episode of “Agentic AI: The Future of Intelligent Systems”, which covers this topic in depth — available now on Spotify at https://open.spotify.com/episode/1H5l2YXzIM3mm91968GUqV?si=C9MzC54fQXuFNZDuO77VSA


Why LLM Coding Copilots Are Failing to Deliver Real Value

There’s a bold narrative sweeping through the software industry: AI coding assistants will redefine engineering. We’ve all seen the headlines — “All code will soon be AI-generated,” “Developers 10× more productive,” “AI writing most of our applications.”

The promise is seductive. The reality is far more complex.

After building and deploying multiple end-to-end production systems using tools like GitHub Copilot, Claude, Gemini, and OpenAI models, one conclusion stands out clearly:

These tools can generate code — but they cannot engineer software.

They deliver impressive demos and quick wins for isolated snippets, yet struggle the moment they step into real, evolving systems. What follows is not a theoretical analysis, but observations from actual implementation — where productivity meets production.


The Grand Promise

In every major launch, we see AI copilots positioned as game-changers. They can generate boilerplate code, fix bugs, create unit tests, and even build small apps from prompts. The idea of a “developer multiplier” — where one engineer plus AI equals the output of five — has become a central theme in the AI transformation story.

And to be fair, there’s value in the promise. For repetitive coding, documentation, or scaffolding, copilots can genuinely accelerate workflows. They reduce cognitive load for simple, pattern-based tasks. But that’s where the value plateaus.

Because software engineering is not about lines of code — it’s about decisions. Architecture, system design, trade-offs, scalability, resilience, and security — these are not patterns to be predicted; they are choices made with intent. That’s where LLM copilots begin to fail.


The Reality Check

1. Architectural Incoherence

LLMs can generate functional code fragments, but they lack architectural context. In one of my test builds, the AI used three different state-management patterns within the same feature — not by choice, but by confusion. The output “looked right” locally but created an unmaintainable structure when scaled.

A human engineer ensures consistency across modules. The AI, on the other hand, simply mimics whichever pattern appears most statistically probable based on its training data.

2. No System-Level Thinking

Copilots are brilliant at the micro level — single functions or classes — but blind at the macro level. They don’t maintain a mental model of the system. They can’t reason across files or understand interdependencies. In one case, the AI hardcoded configuration and pricing logic directly into multiple functions, ignoring the concept of centralized configuration altogether. It “solved” the local task while breaking scalability and maintainability for the entire application.
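
A minimal sketch of the centralized alternative a human would reach for (names and values are hypothetical, not the code from that build):

from dataclasses import dataclass

@dataclass(frozen=True)
class AppConfig:
    base_price: float = 9.99        # previously hardcoded in several functions
    discount_rate: float = 0.10
    api_timeout_s: int = 30

CONFIG = AppConfig()  # single source of truth, loaded once

def quote(units: int) -> float:
    # Every caller reads the same values, so a pricing change is a one-line edit.
    return units * CONFIG.base_price * (1 - CONFIG.discount_rate)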

3. Error Handling: The Forgotten Path

AI-generated code consistently misses the “unhappy path.” In testing a payment flow, Copilot produced near-perfect happy-path logic — but no retry, no transaction rollback, and no error visibility for partial failures. Exceptions were silently caught and ignored. A production-grade engineer anticipates what happens when things go wrong. LLMs simply don’t — unless explicitly told.
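
For contrast, here is a minimal sketch of the unhappy path written deliberately; the payment client and its methods are hypothetical stand-ins, not the code under test:

class TransientNetworkError(Exception):
    pass

class PaymentError(Exception):
    pass

def charge_order(client, order_id: str, amount: float, max_attempts: int = 3):
    """What the generated flow omitted: bounded retries, rollback, visible errors."""
    for attempt in range(1, max_attempts + 1):
        txn = client.begin(order_id)
        try:
            client.charge(txn, amount)
            client.commit(txn)
            return txn
        except TransientNetworkError:
            client.rollback(txn)  # undo partial state before retrying
        except Exception as exc:
            client.rollback(txn)
            raise PaymentError(f"charge failed for {order_id}") from exc  # never swallow
    raise PaymentError(f"gave up on {order_id} after {max_attempts} attempts")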

4. Hallucinated Logic

Sometimes, the AI invents logic that seems valid but doesn’t exist. During integration testing, one generated function appeared out of nowhere. It duplicated another function already in the codebase, slightly modified. This wasn’t human error; it was the model losing context mid-generation. Such hallucinations create debugging chaos later, because the logic seems “plausible,” but it’s not actually wired into the program flow.

5. Blind Spots for Non-Functional Requirements

Performance, security, and scalability don’t feature in an LLM’s predictive scope unless prompted. One AI-generated snippet created a hardcoded retry loop with fixed delays — perfect for small workloads, catastrophic at scale. Another skipped token expiration checks entirely. AI doesn’t “forget” these things — it never knew them. They’re not patterns in code; they’re principles of engineering judgment.
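
Both gaps take only a few lines once you know to ask for them. A hedged sketch with jittered backoff instead of a fixed delay, plus the expiry check that was skipped (claim names are illustrative):

import random
import time

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    # Exponential backoff with jitter: polite at scale, unlike a fixed sleep.
    return random.uniform(0, min(cap, base * 2 ** attempt))

def token_expired(claims: dict, skew_s: int = 30) -> bool:
    # Treat a token as expired slightly early to absorb clock skew.
    exp = claims.get("exp")
    return exp is None or time.time() >= exp - skew_s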


The Hidden Trap: Crowdsourced Thinking

There’s a deeper, subtler problem emerging — LLM copilots make us think in a crowdsourced way. They generate what the majority of the internet has done before — the median of prior knowledge, not the frontier of new ideas.

Ask them to build something with new APIs, unfamiliar frameworks, or original architectures, and they stumble. The AI’s reasoning is rooted in yesterday’s patterns, not tomorrow’s possibilities.

This “averaged intelligence” becomes dangerous for innovation. It recommends complex solutions when simpler ones exist. It follows trends, not insight. For example, when a single API call could solve a use case, the AI might propose a three-layer abstraction pattern because it has seen that in open-source repositories. In other words — it crowdsources your thinking without you realizing it.
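
To make that concrete, here is a hedged sketch against a hypothetical weather endpoint: the entire use case is one call and one field, with no repository, service layer, or factory required.

import json
import urllib.request

def current_temp_c(city: str, api_key: str) -> float:
    # The whole use case: one request, one field from the response.
    url = f"https://api.example.com/v1/weather?q={city}&key={api_key}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["temp_c"]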

This subtle influence can push organizations away from new thinking and toward conventional pattern mimicry. For an industry built on innovation, that’s a quiet regression.


The Missing Holistic Approach

Even when copilots appear to “complete” an app, they miss the essentials that experienced developers never overlook —

  • version upgrades and compatibility,
  • build processes and deployment strategies,
  • logging, monitoring, and performance tuning,
  • dependency management, and
  • security baselines.

These gaps are invisible until the project reaches production. Unless you’ve personally designed, built, deployed, and maintained complex systems, it’s easy to assume the AI has it covered — it doesn’t.

Copilots operate with narrow focus, not holistic awareness. They can code a feature, but they don’t think about the ecosystem the feature lives in. That distinction separates a working prototype from a sustainable system.


The Benchmark Mirage

Benchmarks fuel the illusion of progress. Tests like HumanEval or SWE-Bench showcase impressive accuracy for self-contained coding problems — but that’s not real-world software development. These benchmarks test for correctness of output, not soundness of design.

A Copilot or LLM might pass a functional test while introducing technical debt that explodes months later. Demos show best-case results, not the debugging, rework, and refactoring that follow.

In one real-world scenario, an AI-generated analytics module spammed events continuously, inflating cloud bills by hundreds of dollars. Another assistant, when tested on a live .NET project, repeatedly generated unbuildable pull requests. The tools performed perfectly in the demo — and poorly in deployment.

Benchmarks measure speed. Engineering measures sustainability.


The Large Context Trap

As LLMs evolve, their context windows have expanded dramatically — from a few thousand tokens to millions. On paper, this promises “system-level” understanding: the ability to reason across entire codebases, architectures, and documentation. In practice, it introduces a new illusion of capability.

Having more context is not the same as having more understanding. Even with vast input windows, LLMs still treat information statistically — not structurally. They can see the whole project, but they don’t interpret its intent. The model does not reason about architectural relationships, performance implications, or security dependencies; it merely predicts patterns that appear probable across a larger span of text.

In one real-world experiment, feeding an entire service repository into a long-context model produced elegant summaries and detailed-looking refactors — yet the proposed changes broke key integration contracts. The model recognized syntax and flow, but not system behavior.

The danger of the Large Context Trap is subtle. The illusion of “complete awareness” often convinces teams that the AI now understands their system holistically — when, in reality, it’s only extending its statistical horizon. Without reasoning, memory, or intent, scale alone cannot replace architectural thinking.

True system intelligence requires structured awareness — not longer context windows, but the ability to model relationships, reason over constraints, and preserve design integrity across decisions. Until copilots evolve to that level, they will continue to produce code that looks coherent yet fails in operation.


Why “AI Will Replace Engineers” Is the Wrong Question

Saying that copilots will replace engineers is like saying Excel replaces financial analysts. It doesn’t. It scales their ability to work with data — but the thinking, reasoning, and judgment still belong to the human.

LLMs can write code. They can’t reason about why the code should exist, or how it fits into a larger system.

That’s why the “AI replacing engineers” narrative is misleading. It confuses automation with understanding. The copilots are assistants — not autopilots. And the best engineering teams know this distinction defines success or failure in real deployments.


🔧 The Road Ahead

If LLM copilots are to become meaningful contributors to software engineering, they need a fundamental redesign — not just larger models or faster inference speeds.

The current generation operates within a narrow window: they assist in generating code, but they don’t participate in engineering. They lack the systemic awareness that defines real software creation — architecture, integration, performance, deployment, security, and lifecycle management.

Engineering isn’t linear. It’s an interconnected process where one decision affects many others — from dependency chains and version upgrades to runtime performance, user experience, and security posture. Today’s copilots don’t see those connections; they work line by line, not layer by layer.

They need to evolve from code predictors into contextual collaborators — systems that understand project structure, dependencies, testing, and delivery pipelines holistically. This requires moving beyond language models into engineering models that reason about software as a living ecosystem.

At the same time, the industry must re-examine its direction. The rush to train ever-larger models and flood the market with AI coding tools has become a competition of scale rather than substance. Billions of dollars are being spent chasing leaderboard positions — while the actual developer experience and production readiness remain secondary.

What’s needed now is not more size, but more sense. We need copilots that respect the realities of engineering — grounded in correctness, maintainability, and performance — and that integrate seamlessly into how software is truly built and maintained.

The goal isn’t to automate developers out of the loop. It’s to elevate them — providing insight, structure, and efficiency while preserving human judgment. Only when copilots align with the principles of disciplined software engineering will they deliver real, measurable value — in production, at scale, and over time.

The next generation of copilots must blend reasoning, responsibility, and restraint. They should not just predict the next line of code, but understand why that line matters. They must combine deep contextual learning with lightweight, sustainable compute — an evolution from “Large Language Models” to Lean Engineering Models that prioritize cost, performance, and environmental impact alongside capability.

That’s the real challenge — and the real opportunity — in the road ahead for AI and software engineering.


What is AI Content Loop

Think about the last five things you read online.
A headline.
A summary.
A post that made you nod in agreement.

Now pause and ask yourself — was it original thought?
Or just a reflection of a reflection?

We live in a time of instant information. But the truth is, much of what we consume isn’t coming from the source. It’s content that’s been compressed, rephrased, repackaged — and increasingly, created entirely with the help of AI.

This is what I call the AI content loop.


What Is the AI Content Loop?

It starts with an original insight — a 200-page research paper, a detailed report, a rich podcast discussion.

But from that point on, AI steps in — everywhere:

  • An analyst uses AI tools to extract key takeaways.
  • A writer uses AI to turn that into a summary or article.
  • A content creator rephrases it with AI assistance into a post or a script.
  • A marketer uses AI to generate captions, titles, or social media snippets.
  • And finally, AI tools re-summarize it for you — into the bite-sized form you see in your feed.

What began as deep thought is passed through multiple AI filters — compressed, stretched, reworded — until all that’s left is something that sounds smart, but often lacks substance.

And we don’t even realize it’s happening.


Why It Matters

This loop isn’t inherently bad. Summarization and synthesis can be helpful. AI can make content more accessible.

But here’s the danger:
When compression becomes the default, we mistake familiarity for understanding.

We lose:

  • Depth, because layers are stripped away.
  • Context, because nuance doesn’t fit into bullet points.
  • Originality, because the same source is rewritten endlessly.
  • Independent thinking, because we stop tracing ideas back to their origin.

Breaking the Loop

In a world of infinite summaries, the real edge lies in doing something counterintuitive:
Slowing down. Looking deeper. Thinking independently.

Here’s how:

  1. Think Clearly
    Pause before accepting information. Ask: What’s the source? What’s missing?
  2. Ask Precisely
    Look beyond generalizations. Ask better questions. Dig into specifics.
  3. Create with Intent
    Don’t just reword what’s already out there. Add something new. Share what you’ve actually learned.

The Bottom Line

The AI content loop isn’t going away.
If anything, it’s accelerating — with AI touching every part of the content chain.

But you don’t have to stay trapped in it.

Don’t just consume the reflection.
Go deeper. Ask better questions. Think for yourself.
Build your knowledge in depth — not in summaries.

That’s how you stay relevant.
That’s how you move ahead.

Check out the video for more details.


Sustainable Agentic AI – Green AI White paper

Most Agentic AI conversations are missing a key dimension: cost, carbon, and complexity.

While the spotlight is often on autonomy, orchestration, and innovation, the reality is that Agentic AI systems—if not designed intentionally—carry hidden risks that quietly erode value:
❗ Vague goals that trigger unnecessary actions, retries, and compute waste
❗ Over-planning and decision loops that burn resources without meaningful benefit
❗ Overuse of large models when smaller models would suffice
❗ Redundant tool calls and uncontrolled memory growth
❗ Silent system inefficiencies that drive up cloud costs and emissions without notice

As most organizations are experimenting with or just getting started on Agentic AI, this is the right time to embed efficiency, sustainability, and cost-awareness at the foundation—not as an afterthought.

That’s why I wrote Lean and Green Agentic AI—a white paper with a practical framework for building AI that is not only intelligent but also efficient, scalable, and economically viable.

The paper introduces:
✅ The six-stage Agentic AI lifecycle
✅ Lean principles to minimize cost, carbon, and complexity
✅ Practical techniques for energy-efficient models, inference, and carbon-aware execution
✅ A standardized approach to measurement using the Software Carbon Intensity (SCI) framework and its AI extension (see the sketch below)
📄 Access the full white paper here:
👉 https://github.com/navveenb/lean-agentic-ai/tree/main/research/Sustainable%20Agentic%20AI
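
For orientation, the SCI framework scores software as ((E × I) + M) per R: energy consumed, grid carbon intensity, and embodied emissions, divided by a functional unit such as one agent run. A minimal sketch with made-up numbers:

def sci(energy_kwh: float, grid_gco2_per_kwh: float,
        embodied_gco2: float, functional_units: int) -> float:
    # Software Carbon Intensity: ((E * I) + M) per R, in gCO2e per unit.
    return (energy_kwh * grid_gco2_per_kwh + embodied_gco2) / functional_units

# e.g. 12 kWh at 400 gCO2/kWh plus 5,000 g embodied, across 100,000 agent runs
print(sci(12, 400, 5_000, 100_000))  # -> 0.098 gCO2e per run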

The agentic future is coming fast. Let’s ensure it’s smarter, greener, and built to scale responsibly.


Bringing Order to Content Chaos: How Gemini CLI Elevates Your Command Line Productivity

The command line is the backbone of productivity for many developers, data professionals, and content creators. Yet with growing volumes of files, drafts, and project assets, even the most organized users face content chaos — duplicate files, forgotten revisions, and manual cleanup that eats into creative time.

Enter Gemini CLI: an open-source AI agent that brings Google Gemini’s natural language intelligence right to your terminal. Gemini CLI isn’t just about coding — it’s a smarter way to search, summarize, compare, organize, and automate file and content workflows using plain English.

What Makes Gemini CLI Stand Out?

  •  Direct Access to Gemini 2.5 Pro:
     Instant, lightweight access to a powerful AI model with a generous token context window — ideal for large files and lengthy content.
  •  Natural Language Commands:
     Forget obscure flags or complex scripts. Just ask in everyday language, and Gemini CLI understands your intent.
  •  Productivity Beyond Coding:
     Whether you’re sorting research notes, summarizing docs, or managing creative assets, Gemini CLI adapts to your workflow.

Practical Ways Gemini CLI Boosts Productivity

Here are real-world scenarios where Gemini CLI shines:

  • 🔍 Find and Remove Duplicate Files:
     gemini "Scan this folder for duplicate PDFs and list them"
     gemini "Find images with similar names and flag potential duplicates"
  • 📝 Summarize Key Content:
     gemini "Summarize the main points from all meeting notes in this directory"
     gemini "Extract key differences between Draft_v1.docx and Draft_v2.docx"
  • 🗂️ Organize and Rename Files:
     gemini "Organize all documents by project and year"
     gemini "Batch rename files in this folder using a consistent naming scheme"
  • 🔎 Search by Natural Language:
     gemini "Show all presentations from 2024 with more than 10 slides"
     gemini "List files modified in the last 7 days containing the word ‘proposal’"
  • ⚙️ Automate Repetitive Actions:
     gemini "Move all .txt files older than 6 months to the archive folder"
     gemini "Delete temporary files with 'backup' in their names after review"
  • 🛠️ Content Generation and Debugging:
     gemini "Draft a README.md based on the contents of this project folder"
     gemini "Review this Python script and suggest improvements"

My Experiment: Gemini CLI in Action

To see Gemini CLI in action, I pointed it at one of my project folders — a mix of presentations, reports, and working drafts. With just a few natural language commands, Gemini CLI quickly analyzed the folder, flagged duplicate files, outlined unique documents, and delivered a clear, actionable summary. What would have taken much longer to sort manually was resolved in minutes.

I also tried a creative utility: asking Gemini CLI to take a screenshot of my screen and convert it to JPG. The tool prompted me for the necessary permissions and guided me to grant Terminal access on my Mac. Once enabled, Gemini CLI handled the task seamlessly — showcasing how agent-powered CLI can integrate real-world utility features right into your workflow.

Download the Gemini CLI at – https://blog.google/technology/developers/introducing-gemini-cli-open-source-ai-agent/

Important Caveats and Best Practices

  • File Access and Permissions:
     Gemini CLI can access and modify your files. Always check which folders you’re targeting, especially with move or delete commands.
  • Accidental Deletion:
     AI-powered deletion is fast but irreversible. Add confirmation prompts or use a “dry run” before destructive commands (see the example after this list).
  • Sensitive Content:
     Avoid processing sensitive files unless you’re clear on how data is handled locally vs. in the cloud (refer to documentation).
  • Versioning and Auditability:
     For important assets, enable file versioning or keep a changelog to track changes made via Gemini CLI.
  • AI Limitations:
    Review AI suggestions, especially for bulk operations. Natural language is powerful — but not perfect.
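
For example, a cautious two-step flow before any destructive change might look like this (illustrative prompts; the CLI takes plain English, so exact wording is yours):

   gemini "List the .txt files older than 6 months that would move to the archive folder; do not move anything yet"
   gemini "Move the files you just listed to the archive folder"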

Final Thoughts: The Future Is Agentic

Gemini CLI brings much-needed order and intelligence to content management in the terminal. By combining natural language with robust AI, it transforms how we interact with files, automate tasks, and create content. For developers, creators, and knowledge workers, it’s a way to reclaim time and reduce manual overhead — when used thoughtfully.

💡 This is just one example of how an integrated agent CLI can make a difference. Looking ahead, it’s clear that future operating systems will be powered by smart agents — completely changing how we interact with files, applications, and information across our digital lives.


The New AI Engineering Mindset—Navigating Uncertainty and Opportunity in the Age of Intelligent Machines

We are living through the most transformative era in engineering history. Artificial intelligence—once the domain of research labs and specialized applications—now sits at the core of how systems, products, and organizations are built and operated. For engineers, this brings both exhilaration and deep uncertainty. As intelligent machines automate everything from code review to decision-making, the very foundations of engineering practice are being redefined.

It is natural to feel anxiety or fear as new technologies challenge traditional roles and skills. But focusing only on what may be lost risks overlooking a far greater opportunity: to redefine what it means to engineer in the age of intelligent machines. This is not just about surviving disruption, but about thriving—by developing new ways of thinking, learning, and leading.

This book is your guide to navigating and shaping this new landscape. You’ll discover practical frameworks for thriving amid uncertainty, strategies for rapid learning and upskilling, and a modern mindset for collaborating with AI without losing your edge. You’ll learn how to move beyond basic automation and become an orchestrator—integrating technology, context, and purpose to solve problems that truly matter.

At the heart of this new approach is the concept of the Human Stack—a layered model capturing where engineers create enduring value in the AI era. 

From context engineering and system integration, to oversight, ethics, and vision, the Human Stack highlights the roles where judgment, creativity, and leadership remain irreplaceable. In this book, you’ll see how mastering these layers is essential not just for your relevance, but for the positive impact you can have on your teams, organizations, and the world.

What will you find inside?

  • Step-by-step strategies for adapting to AI-driven change and building future-proof skills
  • Deep dives into the mindset, habits, and collaborative models that define the new engineer
  • Actionable frameworks for orchestrating complex workflows, including the principles of prompt engineering, multi-agent collaboration, and continuous learning
  • A full-length, real-world case study: transforming the Software Development Lifecycle (SDLC) using Agentic AI, including the design and governance of advanced orchestration, integration of protocols like the Model Context Protocol (MCP), and best practices for scaling responsible automation in production
  • Insight into emerging roles, ethical standards, and the opportunities that come with being a technical leader and orchestrator in the AI era

This book goes beyond theory, providing actionable playbooks, architectures, and checklists that you can apply immediately—whether you’re a hands-on engineer, a technical leader, or a strategist guiding your organization’s AI journey.

This is the engineer’s moment of truth. Those who cling to old certainties will watch the future pass them by. But those who embrace uncertainty, see opportunity where others see risk, and learn to orchestrate rather than just automate, will define the next era of progress.

The age of intelligent machines is not a threat—it is the greatest opportunity ever handed to engineers. The path ahead may be uncertain, but it is within this uncertainty that invention—and true leadership—are born.

Having gone through various waves of technology transformation over the past two decades—from my first project on mainframe modernization, where decades-old business logic was translated into new architectures, to now embracing the opportunities and challenges of Gen AI—I’ve witnessed firsthand both the excitement and uncertainty that each new era brings. I see a lot of confusion and anxiety across the engineering community about which skills to develop, what roles to pursue, and how to stay relevant as technology evolves. If I can help clarify this path, provide a practical roadmap, and instill a sense of purpose and confidence, then this book will have achieved its mission.

This book distills those lessons and provides the strategies, mindsets, and examples that will empower you to make your mark—no matter where you are in your journey.

Welcome to your new mindset.

Get your copy of the book at – https://amzn.to/43CnItq


Autonomous Portfolio Analysis with Google ADK, Zerodha MCP, and LLMs

Modern financial analysis is rapidly moving toward automation and agentic workflows. Integrating large language models (LLMs) with real-time financial data unlocks not just powerful insights but also entirely new ways of interacting with portfolio data.

This post walks through a practical, autonomous solution using Google ADK, Zerodha’s Kite MCP protocol, and an LLM for actionable portfolio analytics. The full workflow and code are available on GitHub.


Why This Stack?

  • Google ADK: Enables LLM agents to interact with live tools, APIs, and event streams in a repeatable, testable way.
  • Zerodha MCP (Model Context Protocol): Provides a secure, real-time API to portfolio holdings using Server-Sent Events (SSE).
  • LLMs (Gemini/GPT-4o): Analyze portfolio data, highlight concentration risk, and offer actionable recommendations.

Architecture Overview

The workflow has three main steps:

  1. User authenticates with Zerodha using an OAuth browser flow.
  2. The agent retrieves live holdings via the MCP get_holdings tool.
  3. The LLM agent analyzes the raw data for risk and performance insights.

All API keys and connection details are managed through environment variables for security and reproducibility.


Key Code Snippets

1. Environment and Dependency Setup

import os
from dotenv import load_dotenv

# Load API keys and config from .env
load_dotenv('.env')
os.environ["GOOGLE_API_KEY"] = os.environ["GOOGLE_API_KEY"]
os.environ["GOOGLE_GENAI_USE_VERTEXAI"] = "False"

2. ADK Agent and Toolset Initialization

from google.adk.agents.llm_agent import LlmAgent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, SseServerParams

MCP_SSE_URL = os.environ.get("MCP_SSE_URL", "https://mcp.kite.trade/sse")

toolset = MCPToolset(
    connection_params=SseServerParams(url=MCP_SSE_URL, headers={})
)

root_agent = LlmAgent(
    model='gemini-2.0-flash',
    name='zerodha_portfolio_assistant',
    instruction=(
        "You are an expert Zerodha portfolio assistant. "
        "Use the 'login' tool to authenticate, and the 'get_holdings' tool to fetch stock holdings. "
        "When given portfolio data, analyze for concentration risk and best/worst performers."
    ),
    tools=[toolset]
)

3. Orchestrating the Workflow

from google.adk.sessions import InMemorySessionService
from google.adk.artifacts.in_memory_artifact_service import InMemoryArtifactService
from google.adk.runners import Runner
from google.genai import types

import asyncio
import re
import webbrowser

async def run_workflow():
    session_service = InMemorySessionService()
    artifacts_service = InMemoryArtifactService()
    session = await session_service.create_session(
        state={}, app_name='zerodha_portfolio_app', user_id='user1'
    )

    runner = Runner(
        app_name='zerodha_portfolio_app',
        agent=root_agent,
        artifact_service=artifacts_service,
        session_service=session_service,
    )

    # 1. Login Step
    login_query = "Authenticate and provide the login URL for Zerodha."
    content = types.Content(role='user', parts=[types.Part(text=login_query)])
    login_url = None
    async for event in runner.run_async(session_id=session.id, user_id=session.user_id, new_message=content):
        if event.is_final_response():
            match = re.search(r'(https?://[^\s)]+)', getattr(event.content.parts[0], "text", ""))
            if match:
                login_url = match.group(1)
    if not login_url:
        print("No login URL found. Exiting.")
        return
    print(f"Open this URL in your browser to authenticate:\n{login_url}")
    webbrowser.open(login_url)
    input("Press Enter after completing login...")

    # 2. Fetch Holdings
    holdings_query = "Show my current stock holdings."
    content = types.Content(role='user', parts=[types.Part(text=holdings_query)])
    holdings_raw = None
    async for event in runner.run_async(session_id=session.id, user_id=session.user_id, new_message=content):
        if event.is_final_response():
            holdings_raw = getattr(event.content.parts[0], "text", None)
    if not holdings_raw:
        print("No holdings data found.")
        return

    # 3. Analysis
    analysis_prompt = f"""
You are a senior portfolio analyst.

Given only the raw stock holdings listed below, do not invent or assume any other holdings.

1. **Concentration Risk**: Identify if a significant percentage of the total portfolio is allocated to a single stock or sector. Quantify the largest exposures, explain why this matters, and suggest specific diversification improvements.

2. **Performance Standouts**: Clearly identify the best and worst performing stocks in the portfolio (by absolute and percentage P&L), and give actionable recommendations.

Raw holdings:

{holdings_raw}

Use only the provided data.
"""
    content = types.Content(role='user', parts=[types.Part(text=analysis_prompt)])
    async for event in runner.run_async(session_id=session.id, user_id=session.user_id, new_message=content):
        if event.is_final_response():
            print("\n=== Portfolio Analysis Report ===\n")
            print(getattr(event.content.parts[0], "text", ""))

asyncio.run(run_workflow())

Security and Environment Configuration

All API keys and MCP endpoints are managed via environment variables or a .env file.
Never hardcode sensitive information in code.

Example .env file:

GOOGLE_API_KEY=your_google_gemini_api_key
MCP_SSE_URL=https://mcp.kite.trade/sse

What This Enables

  • Reproducible automation: Agents can authenticate, retrieve, and analyze portfolios with minimal human input.
  • Extensibility: Easily add more tools (orders, margins, etc.) or more advanced analytic prompts.
  • Separation of concerns: Business logic, security, and agent workflow are all clearly separated.

Repository

Full working code and documentation:
https://github.com/navveenb/agentic-ai-worfklows/tree/main/google-adk-zerodha


This workflow is for educational and portfolio analysis purposes only. Not investment advice.


Comparative Analysis of AI Agentic Frameworks

AI agentic frameworks provide the infrastructure for building autonomous AI agents that can perceive, reason, and act to achieve goals. With the rapid growth of large language models (LLMs), these frameworks extend LLMs with orchestration, planning, memory, and tool-use capabilities. This blog compares prominent frameworks from a 2025 perspective – including LangChain, Microsoft AutoGen, Semantic Kernel, CrewAI, LlamaIndex AgentWorkflows, Haystack Agents, SmolAgents, PydanticAI, and AgentVerse – across their internal execution models, agent coordination mechanisms, scalability, memory architecture, tool use abstraction, and LLM interoperability. I will also cover emerging frameworks in my next blog (e.g. Atomic Agents, LangGraph, OpenDevin, Flowise, CAMEL) and analyze their design principles, strengths, and limitations relative to existing solutions.

Comparison of Established Agentic Frameworks (2025)

The table below summarizes core characteristics of each major framework.

Table 1. Key Features of Prominent AI Agent Frameworks (2025)

LangChain
  • Execution Model: Chain-of-thought sequences (ReAct loops) using prompts. Chains modularly compose LLM calls, memory, and actions.
  • Agent Coordination: Primarily single-agent, but supports multi-agent interactions via custom chains. No built-in agent-to-agent messaging.
  • Scalability Strategies: Designed for integration rather than distributed compute. Concurrency handled externally.
  • Memory Architecture: Pluggable Memory modules (short-term context, long-term via vector stores).
  • Tool Use & Plugins: Abstraction for Tools as functions. Implements ReAct and OpenAI function calling. Rich API/DB connectors.
  • LLM Interoperability: Model-agnostic: supports OpenAI, Azure, HuggingFace, etc.

AutoGen (Microsoft)
  • Execution Model: Event-driven asynchronous agent loop. Agents converse via messages, generating code or actions executed asynchronously.
  • Agent Coordination: Multi-agent conversation built-in, e.g., AssistantAgent and UserProxyAgent chat to solve tasks.
  • Scalability Strategies: Scalable by design: async messaging for non-blocking execution. Supports distributed networks.
  • Memory Architecture: Relies on message history for context. Can integrate external memory if needed.
  • Tool Use & Plugins: Tools and code execution via messages. Easy integration with Python tools and custom functions.
  • LLM Interoperability: Multi-LLM support (OpenAI, Azure, etc.), optimized for Microsoft’s stack.

Semantic Kernel
  • Execution Model: Plan-and-execute model using skills (functions) and planners. High-level SDK for embedding AI into apps.
  • Agent Coordination: Concurrent agents supported via planner/orchestrator. Multi-agent collaboration via shared context.
  • Scalability Strategies: Enterprise-grade scalability: async and parallel calls, integration with cloud infrastructure.
  • Memory Architecture: Robust Memory system: supports volatile and non-volatile memory stores. Vector memory supported.
  • Tool Use & Plugins: Plugins (Skills) as first-class tools. Secure function calling for C#/Python functions.
  • LLM Interoperability: Model-flexible: OpenAI, Azure OpenAI, HuggingFace. Multi-language support.

CrewAI
  • Execution Model: Role-based workflow execution. Pre-defined agent roles run in sequence or parallel. Built atop LangChain.
  • Agent Coordination: Multi-agent teams (“crews”) with structured coordination. Sequential, hierarchical, and parallel pipelines supported.
  • Scalability Strategies: Focuses on orchestrating multiple agents. Enterprise version integrates with cloud for production deployment.
  • Memory Architecture: Inherits LangChain memory. Context passed through crew steps. Conflict resolution supported.
  • Tool Use & Plugins: Flexible tool integration per agent role. Open-source version integrates LangChain tools.
  • LLM Interoperability: Any LLM via LangChain: OpenAI, Anthropic, local models supported.

LlamaIndex AgentWorkflows
  • Execution Model: Workflow graph execution. Agents (nodes) execute in a graph, handing off state via shared Context.
  • Agent Coordination: Built for both single and multi-agent orchestration. Supports cyclic workflows and human-in-the-loop.
  • Scalability Strategies: Parallelizable workflows. Checkpointing for intermediate results. Scales to large data volumes.
  • Memory Architecture: Shared memory context via WorkflowContext. Integration with vector stores.
  • Tool Use & Plugins: Tools integrated as functions or pre-built tools. Strong retrieval-generation combination.
  • LLM Interoperability: Model-agnostic via LlamaIndex: OpenAI, HF, local LLMs.

Haystack Agents
  • Execution Model: Tool-driven ReAct agents. LLM planner selects tools iteratively until task completion.
  • Agent Coordination: Primarily single-agent. Can be extended to multi-agent via connected pipelines.
  • Scalability Strategies: Designed for production Q&A. Scalability via batching and pipeline parallelism.
  • Memory Architecture: Emphasis on retrieval-augmented memory. Uses embedding stores and indexes.
  • Tool Use & Plugins: Abstracts services as Tools. Modular pipeline design for swapping components.
  • LLM Interoperability: Pluggable LLMs via PromptNode: OpenAI, Azure, Cohere, etc.

SmolAgents (HF)
  • Execution Model: Minimalist ReAct implementation. Agents write/execute code or call structured tools.
  • Agent Coordination: Single-agent, multi-step. Can run multiple agents in parallel if needed.
  • Scalability Strategies: Lightweight for rapid prototyping. Can embed in larger systems. No built-in distribution.
  • Memory Architecture: No built-in long-term memory. External vector DBs can be integrated manually.
  • Tool Use & Plugins: Direct code execution with secure sandbox options. Minimal abstractions.
  • LLM Interoperability: Highly model-flexible: OpenAI, HuggingFace, Anthropic, local models.

PydanticAI
  • Execution Model: Structured agent loop with output validation. Supports async execution. Pythonic control flow.
  • Agent Coordination: Single-agent by default. Supports multi-agent via delegation and composition.
  • Scalability Strategies: Async & scalable: handles concurrent API calls or tools. Production-grade error handling.
  • Memory Architecture: Structured state passed via Pydantic models. External stores can be integrated.
  • Tool Use & Plugins: Tools as Python functions with Pydantic I/O models. Dependency injection supported.
  • LLM Interoperability: Model-agnostic: OpenAI, Anthropic, Cohere, Azure, Vertex AI, etc.

AgentVerse (Fetch.ai)
  • Execution Model: Modular multi-agent environment simulation. Agents register in a decentralized registry.
  • Agent Coordination: Multi-agent by design. Agents discover each other and collaborate dynamically.
  • Scalability Strategies: Supports large agent populations. Agent Explorer UI for monitoring. Distributed deployment supported.
  • Memory Architecture: Environment state as shared memory. Agents may also have private memory/state.
  • Tool Use & Plugins: Tools as environment-specific actions. Emphasizes communication protocols.
  • LLM Interoperability: Model-agnostic. LLM-based agents supported via wrappers.

Agentic AI: From Strategy to Purposeful Implementation

As we welcome 2025, I’m thrilled to introduce my latest book, which reflects my vision for the future of AI—where systems go beyond automation to adapt dynamically, make informed decisions, and align with purpose and sustainability.

This book addresses a critical gap: the lack of a structured framework for Agentic AI. In this book, I’ve modeled a framework inspired by human cognition, offering a clear pathway for designing impactful, sustainable, and purpose-driven Agentic AI systems.

What’s Inside?
– Cognitive Frameworks: The 7 foundational layers of Agentic AI.
– Purposeful Strategy: Practical ways to embed ethics and sustainability.
– Practical Implementation: Step-by-step guidance and tools for domain-specific agents.
– 10+ Agentic AI Patterns: Explore reusable patterns for building adaptable, intelligent systems.
– Leadership in AI: Navigate challenges and seize opportunities in intelligent systems.
– Observability and Governance: Ensure transparency, accountability, and continuous improvement.

This book bridges the gap between vision and implementation, equipping leaders, technologists, and policymakers with the tools to create Agentic AI systems that make a meaningful impact.

📚 Now available on Amazon https://amzn.to/420TXC3

Wishing you all a successful and inspiring 2025! 🌟


Top 10 Tech Predictions for 2025

As we enter 2025, technological advancements are set to reshape industries, enhance daily life, and create new ethical considerations. These trends reflect a collective drive for smarter, more efficient, and responsible solutions. Here are my Top 10 Tech Trends that will define the year ahead.

1. AI Everywhere: Pervasive Integration of Artificial Intelligence

Artificial Intelligence is moving beyond specialized use cases to become an integral part of daily operations. AI will enhance decision-making, optimize business processes, and power smarter devices. From AI-driven enterprise solutions to consumer technology, AI will be embedded in nearly every sector, making interactions more seamless and intelligent.

2. Cybersecurity for the AI Era

As AI adoption accelerates, so will cyber threats leveraging AI’s capabilities. Cybersecurity measures will need to become more adaptive, sophisticated, and AI-powered to counter these threats. Expect AI-driven security systems that can predict, prevent, and mitigate attacks in real time, ensuring robust digital defense strategies.

3. Quantum Computing Breakthroughs

Quantum computing will edge closer to practical applications, offering the ability to solve problems previously deemed unsolvable by traditional computing. Industries like logistics, pharmaceuticals, and finance will benefit from these advancements, enabling more efficient computations, simulations, and optimizations at unprecedented speeds.

4. AI-Powered Agents Transforming Interactions

AI agents will evolve into advanced, autonomous assistants capable of handling complex tasks independently. These agents will streamline workflows, enhance productivity, and improve customer experiences by understanding context, learning user preferences, and automating routine processes across various platforms.

5. The Rise of Autonomous Vehicles

Self-driving technology will continue to progress, moving closer to mainstream adoption. Enhanced safety, reduced traffic congestion, and more efficient logistics networks will be driven by advancements in autonomous vehicles. Urban mobility and transportation industries will undergo significant transformations, making travel safer and more efficient.

6. Ethical AI and Compliance-Driven Development

As AI becomes more powerful, the emphasis on ethical and transparent AI development will increase. Organizations will prioritize fairness, accountability, and compliance with evolving regulations. Ethical AI practices will focus on reducing biases, ensuring transparency, and fostering trust in AI systems, addressing both societal and organizational concerns.

7. Neurological Enhancements Through Technology

Technological advancements in neuroscience will pave the way for brain-computer interfaces and cognitive enhancements. These innovations will improve communication for individuals with disabilities and offer cognitive boosts for educational and professional applications. The boundary between technology and human potential will continue to blur.

8. Battling Disinformation with Advanced Security

As misinformation grows more sophisticated, technologies for detecting and combating disinformation will become critical. AI-driven tools will help verify authenticity, analyze information patterns, and protect against the spread of false narratives. Ensuring the integrity of information will be essential for public trust and organizational credibility.

9. Energy-Efficient Computing for a Sustainable Future

With increasing environmental concerns, energy-efficient computing will become a priority. Innovations in hardware and software will aim to reduce the energy consumption of data centers, devices, and cloud infrastructure. This trend will balance technological growth with sustainability goals, minimizing the carbon footprint of digital operations.

10. AI Governance Platforms for Responsible Innovation

To manage the rapid deployment of AI, governance platforms will play a crucial role in ensuring responsible and ethical use. These platforms will help organizations track compliance, manage risks, and enforce transparency in AI systems. By providing frameworks for responsible AI, they will mitigate ethical challenges and promote sustainable innovation.

A Focused Path Forward

As technology advances in 2025, organizations must prioritize sustainability, ethical AI, and energy efficiency. By minimizing environmental impact, ensuring responsible AI use, and adopting energy-efficient practices, businesses can drive innovation that supports both human progress and planetary well-being. The future of technology lies in balancing advancement with responsibility, shaping a world that is smarter, safer, and more sustainable.

This is my last article for the year. Thank you to all the readers for engaging with this newsletter and sharing your valuable feedback. Wishing you all a joyful holiday season and a fantastic new year!

Leaving on a light note – Santa might just be using drones to deliver your presents this year! Here’s a fun video to enjoy – https://www.youtube.com/watch?v=obQUtuN24wQ
