
Saturday, December 27, 2025

How Do We Measure LLMs? A Simple Guide to Evaluation Metrics

Understanding Evaluation Metrics for Large Language Models by Anupam Tiwari 

As large language models (LLMs) become more capable, evaluating their outputs becomes increasingly important. This presentation provides a concise overview of the most commonly used LLM evaluation metrics, ranging from traditional n-gram based measures like BLEU and ROUGE to modern semantic and human-preference-based approaches. It is intended as a quick reference for anyone looking to understand how LLM performance is measured in practice.
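To make the n-gram family concrete, here is a minimal, self-contained sketch of the overlap counting at the heart of BLEU and ROUGE-N. This is an illustration only: real implementations add smoothing, a brevity penalty for BLEU, higher-order n-gram averaging, and multi-reference handling, and the function names here are my own.

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_n(candidate, reference, n=1):
    """Recall-oriented overlap (the core of ROUGE-N):
    how much of the reference does the candidate cover?"""
    cand = Counter(ngrams(candidate.split(), n))
    ref = Counter(ngrams(reference.split(), n))
    overlap = sum((cand & ref).values())   # clipped n-gram matches
    return overlap / max(sum(ref.values()), 1)

def bleu_n(candidate, reference, n=1):
    """Precision-oriented overlap (the core of BLEU, minus the
    brevity penalty): how much of the candidate is in the reference?"""
    cand = Counter(ngrams(candidate.split(), n))
    ref = Counter(ngrams(reference.split(), n))
    overlap = sum((cand & ref).values())
    return overlap / max(sum(cand.values()), 1)
```

The asymmetry is the point: a short candidate that copies reference words scores high on precision (BLEU-style) but low on recall (ROUGE-style), which is why summarization work leans on ROUGE while translation work historically leaned on BLEU.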

Friday, December 26, 2025

Eighteen Years of Curiosity, Code, and Conviction

    On December 26, 2008, this blog began with a simple cybersecurity observation. What followed was never planned — it simply unfolded.

    The early years were rooted in the cybersecurity domain: Nmap scans, BackTrack, Kali Linux, learning how systems break so they can be made stronger. Somewhere around 2015, curiosity pulled me into blockchain, a shift that reshaped how I thought about trust, decentralization, and systems at scale. That path led to a PhD in 2022, and since then the journey has only deepened into post-quantum blockchains, AI, post-quantum cryptography, and quantum technologies.

    This blog, now 800+ posts and 1.38M+ hits across 18 years, is almost entirely technical. It is a record of what I have seen, learned, questioned, failed at, and understood, often thanks to others, always shared back with the same intent.

    It hasn’t come without cost. Time that could have been spent with my wife, my daughter, and my home often went into reading, writing, and experimenting. But passion is a kind of madness, and I’ve been deep in it. No regrets, only awareness.

    Looking back, this space is not just a blog. It is a timeline of evolving technologies, changing mindsets, and sustained curiosity. And as the landscape shifts toward quantum and beyond, the journey feels as exciting as it did on day one.

    So here’s to December 26, 2008 → December 26, 2025.

Still learning. Still sharing. Still curious.

Tuesday, December 23, 2025

Today We Model AI... Tomorrow AI Will Model Humans

1.    Right now, AI is in its embryonic stage and limited in scope, but still powerful enough to impress. What we see today is just the beginning. In the near future, AI won’t just be a tool. It will model us, shaping our choices, our behaviors, and even our beliefs, often without us realizing it.

2.    As AI becomes more advanced, its presence will be more than just a quiet observer of our actions. It will be actively involved in creating the digital environments in which we live, shaping the content we consume, the products we buy, and the experiences we have. We’ll find ourselves not merely interacting with AI but becoming increasingly dependent on it, trusting it to guide us in more personal and significant aspects of our lives.


The Invisible Hand of AI

3.    Imagine a world where AI quietly influences your decisions. What you see, what you buy, who you interact with: all of it will be subtly curated by algorithms designed to predict and shape your every move. At first, this might seem like convenience, but as AI evolves, it will become less of a tool and more of a puppet master, guiding us through invisible strings.

4.    This influence is already being felt. Consider the personalized recommendations on platforms like YouTube or Netflix. These systems don’t just reflect your preferences; they shape them. The more data these algorithms collect, the more they predict what you might like or need next, pushing you toward specific choices, often without you even noticing. In the future, this process will only become more sophisticated, and harder to detect.

The Profit Trap

5.    Behind this AI evolution will be one powerful driver: PROFIT. Corporations will compete not just for technological superiority but for control over human behavior. The more AI learns about us, the easier it becomes to manipulate our choices, desires, and even our lives. It will no longer just serve us; it will control us.

6.    The lines between what’s "personal" and what’s "advertised" will blur. AI will evolve into something capable of predicting not only what you want, but also what you think you want, based on your behaviors, emotions, and digital footprint. This personalized manipulation is dangerous because it targets us at our most vulnerable points: our need for connection, affirmation, and happiness. Before we even realize it, we may find ourselves trapped in a cycle of consumerism that we didn't consciously choose.

The Hubris of Human Advancement

7.    The advancements in science and technology have fueled a dangerous kind of hubris: the belief that we can conquer, control, and even redefine nature itself. Our egos have inflated disproportionately, and our greed seems to know no bounds. We have reached a point where our technological might far outstrips our moral and ethical understanding of its consequences.


 8.    This arrogance has led us to a situation where we no longer merely adapt to the world around us. Instead, we have started reshaping it, reprogramming our environment, our bodies, and even our minds. But in doing so, we risk becoming the architects of our own undoing, forgetting that the more we control, the more we lose control over what we once held dear.

A Wake-Up Call

9.    If we aren’t careful, AI will shape a world that we no longer control. A world designed not by humans, but by algorithms working in the background, shaping our lives for the benefit of a few. No matter where we live, we might find ourselves cocooned by a web of unfathomable algorithms that manage our lives, reshape our politics and culture, and even re-engineer our bodies and minds. And in the process, we may no longer be able to comprehend the forces that control us, let alone stop them.

It’s time to ask: Who will control AI, and who will it control in the end?

10.    If a twenty-first-century totalitarian network succeeds in conquering the world, it may not be led by a human dictator, but by nonhuman intelligence: an AI network with the power to manipulate not just our choices, but our very sense of self. This future isn’t as distant as we might think, and it’s up to us to ensure that we don’t lose ourselves in the process.

AI and Exactitude: Redefining Precision in Science

Science advances by asking careful questions and giving disciplined answers. At the heart of this process lies EXACTITUDE: the commitment to precision, accuracy, and reliability in understanding the natural world. As artificial intelligence (AI) becomes deeply embedded in scientific work, it is reshaping how exactitude is achieved, while also raising important questions about its limits.

What Is Exactitude in Science?

Exactitude in science refers to how closely scientific knowledge aligns with reality and how consistently it can be verified. It is not about absolute certainty, but about disciplined closeness to the truth.

Exactitude rests on a few core pillars:

  • Accuracy: Results are close to true or accepted values

  • Precision: Measurements are consistent when repeated

  • Objectivity: Personal bias is minimized

  • Reproducibility: Independent researchers can confirm results

  • Transparency: Methods and assumptions are clearly stated

Because all measurements involve uncertainty, science does not promise perfection but only continual refinement.
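The distinction between accuracy and precision, the first two pillars above, can be made concrete with a toy numerical example. The "instruments" and measurement values below are invented purely for illustration.

```python
import statistics

true_value = 9.81  # accepted value of g in m/s^2

# Hypothetical repeated measurements from two instruments
precise_but_biased = [9.50, 9.51, 9.49, 9.50]   # consistent, but off target
accurate_but_noisy = [9.90, 9.70, 9.85, 9.79]   # centred on target, but scattered

for name, runs in [("precise-but-biased", precise_but_biased),
                   ("accurate-but-noisy", accurate_but_noisy)]:
    mean = statistics.mean(runs)
    spread = statistics.stdev(runs)   # low spread  = high PRECISION
    bias = abs(mean - true_value)     # low bias    = high ACCURACY
    print(f"{name}: mean={mean:.3f}, spread={spread:.3f}, bias={bias:.3f}")
```

The first instrument repeats itself almost perfectly yet is systematically wrong; the second scatters around the true value. Exactitude in science demands attention to both failure modes, which is why accuracy and precision are listed as separate pillars.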

How AI Enhances Exactitude in Science

AI has become a powerful ally in improving scientific exactitude, especially in data-heavy fields.

Precision at Scale

AI systems can analyze massive datasets with consistency and speed, reducing human calculation errors and uncovering subtle patterns invisible to manual analysis.

Improved Models and Predictions

From climate forecasting to drug discovery, AI refines scientific models, improving predictive accuracy and reducing noise in complex systems.

Reproducibility and Consistency

Algorithms perform identical operations every time, supporting reproducible outcomes when data and methods are shared.

Error and Anomaly Detection

AI can flag outliers, inconsistencies, or faulty data points that might otherwise distort results.

Automated Experimentation

In laboratories, AI-driven systems can control variables with high consistency, strengthening experimental reliability.

The Limits of AI in Scientific Exactitude

Despite its strengths, AI does not guarantee truth.

  • Data Dependence
    AI systems inherit the quality—and biases—of their training data. Poor data leads to poor exactitude.

  • Opacity (“Black Box” Models)
    Highly accurate AI models may lack clear explanations, challenging scientific transparency and understanding.

  • False Sense of Certainty
    Numerical precision can create the illusion of correctness, even when underlying assumptions are flawed.

  • Human Judgment Remains Essential
    AI does not define research questions, assess ethical implications, or determine meaning. These remain human responsibilities.


     

A Balanced View

AI amplifies exactitude, but it does not replace the scientific method. True exactitude emerges from the collaboration between:

  • Rigorous methodology

  • Transparent reasoning

  • Critical human judgment

  • Intelligent tools like AI

Used wisely, AI strengthens science. Used uncritically, it risks turning precision into misplaced confidence.

To Conclude

Exactitude in science is not about being infallible; it is about being careful, honest, and corrigible. AI helps science move closer to this ideal, but only when guided by human responsibility, skepticism, and clarity of purpose.

Why Counterspeech Scales Better Than Bans in Combating Misinformation?

1.    Modern responses to misinformation and disinformation rely heavily on moderation strategies such as filtering, removal, and algorithmic suppression. While these approaches aim to limit harm, they require significant technical effort, continuous monitoring, and complex judgment calls, especially when false content spreads rapidly and at scale.

2.    Misinformation typically emerges from error or misunderstanding, whereas disinformation is intentionally engineered to mislead. Despite this distinction, both are difficult to contain once they achieve viral distribution, often outpacing detection and enforcement systems.

3.    As information volume increases, banning or filtering content becomes progressively harder, slower, and more resource-intensive. In contrast, distributing accurate, verifiable information can be implemented more efficiently through trusted channels, automated dissemination, and strategic amplification.

4.    This supports the counterspeech doctrine, which argues that the most scalable solution to false information is more truthful information, not tighter restrictions. Rather than attempting to suppress every false signal, counterspeech strengthens the information environment by increasing the visibility, clarity, and availability of credible data—allowing truth to compete and correct at scale.

Sunday, December 21, 2025

AI-Generated Rage Bait: How Synthetic Outrage Is Stealing Attention and Undermining Society

1.    In the digital age, attention has become one of the most valuable resources. Unfortunately, it is also one of the most exploited. A growing and troubling trend known as rage bait is increasingly being amplified by AI, deliberately provoking anger and frustration to drive engagement, often at the cost of individual growth and social harmony.

What Is Rage Bait?

2.    Rage bait refers to content intentionally designed to trigger strong emotional reactions, especially anger or outrage. These posts, videos, or memes often contain misleading claims, half-truths, or exaggerated viewpoints. The goal is not to inform or educate, but to provoke reactions such as comments, shares, and arguments, because platforms reward engagement, regardless of whether it is positive or negative.

 

3.    Traditionally, rage bait required human creators. Today, AI has changed that completely.

How AI Supercharges Rage Bait

4.    AI can now generate synthetic content at massive scale:

  • Manipulated or fabricated videos

  • Emotionally charged memes

  • Fake or misleading images

  • Sensational captions optimized for maximum reaction

5.    These tools can rapidly adapt to trends, target specific groups, and spread across platforms in minutes. Because AI-generated content can look highly realistic, it becomes increasingly difficult for users—especially young people—to distinguish between what is real and what is fabricated.

6.    The result is an endless stream of emotionally provocative material, engineered not for truth, but for attention.

The Cost: Wasted Attention and Lost Potential

7.    Time and attention spent reacting to manufactured outrage is time not spent on:

  • Learning and skill development

  • Creative pursuits

  • Physical and mental well-being

  • Constructive civic engagement

8.    For youth in particular, this is a serious concern. Instead of encouraging critical thinking or long-term value creation, rage bait conditions the mind for instant emotional reaction. Over time, this weakens focus, patience, and the ability to engage thoughtfully with complex issues.

From Online Anger to Real-World Consequences

 

9.    There have been increasing instances where digitally amplified misinformation and rage-driven narratives spill into the real world. When emotionally charged fake or misleading content spreads unchecked, it can contribute to:

  • Public unrest

  • Riots and vandalism

  • Damage to public and government property

  • Breakdown of trust between communities and institutions

10.    While not every incident can be traced to online content alone, AI-amplified rage bait makes escalation faster and harder to control, especially when people act before verifying information.

The Challenge of Identifying What’s Real

11.    One of the biggest dangers today is the lack of reliable controls to quickly identify whether content is real or fake. AI-generated videos and images can appear authentic to the untrained eye. Fact-checking often lags behind virality, meaning false content can reach millions before corrections ever appear.

12.    This creates a perfect environment for fake news, manipulation, and emotional exploitation.

The Urgent Need for Global Regulation

13.    Technology evolves faster than policy, but the gap is becoming dangerous. There is a clear need for:

  • Stronger global regulations on AI-generated content

  • Mandatory labeling of synthetic media

  • Faster detection and takedown systems

  • Platform accountability for algorithmic amplification

  • Public education on digital literacy

14.    Without coordinated and expedited action, this problem risks growing beyond control, eroding trust, wasting human potential, and destabilizing societies.

 

A Choice for the Future

15.    AI itself is not the enemy. Used responsibly, it can educate, empower, and uplift. But when it is weaponized to manufacture outrage for profit or influence, it becomes a serious threat to attention, truth, and social cohesion. The question is not whether AI-generated rage bait will continue to grow: it will, without doubt. The real question is whether societies choose to recognize the danger early and act, or allow synthetic outrage to shape the next generation.

Attention is precious. How we protect it will define the future.

AI and the Quiet Rise of Corporate States

    History teaches us to expect change through dramatic moments: revolutions, elections, wars, declarations. Yet some of the most consequential shifts occur quietly. They do not announce themselves. They arrive gradually, wrapped in efficiency, convenience, and progress.

    Across the world today, every nation is home to a small number of extraordinarily large corporations. They vary in sector and culture, but they share defining traits: immense scale, deep reach, and growing influence over everyday life. These entities are not villains, nor are they benevolent guardians. They are highly effective participants in systems that reward growth, efficiency, and dominance.

    What makes the present moment unique is not the existence of large corporations. It is what they now hold—and how quickly that power compounds.

  • They hold data on behavior, preferences, movement, and belief.
  • They hold infrastructure: digital, financial, logistical, informational.
  • They increasingly hold intelligence, human and artificial, learning continuously from real-world activity.

None of this requires an explicit intention to rule. History shows that power rarely does.

 

From Partnership to Power

    Modern states and corporations are deeply intertwined. Governments rely on private entities for technology, innovation, speed, and scale. Corporations rely on states for stability, legitimacy, and regulation. At first glance, this appears mutually beneficial, and often it is.

    Over time, however, reliance can become structural. When essential systems (communication, finance, energy, platforms, data, and intelligence) are operated primarily outside the public domain, authority begins to shift. Not through confrontation, but through dependence. Not through takeover, but through normalization.

    The concern is not that corporations will suddenly challenge or dismantle nation-states. The possibility is more direct: that some corporations will themselves evolve into states in all but name. As economic gravity, digital infrastructure, data ownership, and decision-making increasingly flow through a few dominant entities, power may no longer be identified primarily with geography or elected institutions. Instead, nations may come to be recognized informally but meaningfully by the corporations that anchor their economies, shape their technologies, and sustain their systems. Governments may continue to exist, but governance itself could become inseparable from corporate structure.

AI as the Accelerator

    What expedites this transformation is artificial intelligence.

    AI does not introduce new ambitions; it amplifies existing ones. It compresses time, scales decision-making, and converts influence into automated systems. Trained primarily on efficiency, optimization, and return, AI naturally strengthens those entities with the most data, capital, and reach.

This makes AI not a neutral tool, but a force multiplier.

    By accelerating prediction, personalization, and control, AI enables corporations to operate with a level of coordination and foresight that once belonged only to states. Decisions that previously took years (policy shifts, market influence, behavioral change) can now occur in cycles of weeks or even days. Institutions built for deliberation struggle to keep pace with systems designed for speed.

    In this environment, corporations do not need to govern explicitly. AI-driven systems quietly shape choices, flows, and outcomes, often more effectively than traditional authority.

Profit as the Primary Signal

    AI systems learn from what they are rewarded for. In a world where profit remains the dominant success metric, AI will optimize for profit, not out of intent but out of design.

    Global frameworks speak of sustainability, responsibility, and shared human goals. Markets, however, speak the language of returns. When these signals compete, the clearer and more immediate one tends to prevail.

    As AI grows more capable, it risks reinforcing models that prioritize scale over balance and efficiency over consequence. Human values (dignity, equity, long-term well-being) become harder to encode and easier to sideline unless deliberately protected.

The danger is not malfunction. The danger is alignment.

 

Alignment Without Accountability

    When a small number of powerful entities across regions and industries optimize toward similar objectives, coordination does not require conspiracy. Shared incentives are enough.

    In such a system, welfare may persist, but primarily as a stabilizing mechanism. Responsibility may be articulated, but often as compliance or narrative. Humanity remains present, but increasingly abstracted into data points, segments, and performance indicators.

What begins as optimization quietly becomes authority.

A Future Still Undecided

    This is not a prediction, nor an accusation. It is a possibility emerging from current trajectories.

    Human history moves in cycles: concentration followed by correction, dominance followed by reform. Technology and corporations do not decide outcomes alone. Societies do, through what they regulate, what they reward, and what they refuse to trade away for efficiency.

    As AI continues to accelerate power, the defining question of the future may not be whether growth continues, but who defines its purpose.

    In an age where intelligence scales faster than institutions adapt, the challenge is ensuring that humanity remains the objective, not merely the input.

    Awareness is the first form of accountability. And awareness, once widespread, has a way of reshaping futures.

Saturday, December 20, 2025

From Play to Screens: The Rise of EXPERIENCE BLOCKERS

Childhood is meant to be a time of exploration, play, and discovery. Yet, in today’s digital age, smartphones and tablets—often handed to children in the name of care or safety—are quietly blocking experiences that shape their growth. Experts call them experience blockers because they reduce real-world learning, social interaction, and creativity.

Example 1: Playtime and Imagination

  • Conventional Child: Builds forts, plays make-believe, and invents stories with friends. Every game develops creativity, problem-solving, and teamwork.

  • Screen-Bound Child: Watches pre-made videos or plays passive games. Imagination is limited to what the app provides, and collaborative play is rare.

Example 2: Outdoor Exploration and Physical Activity

  • Conventional Child: Climbs trees, runs in the park, and learns coordination through active play. Physical challenges teach resilience and risk assessment.

  • Screen-Bound Child: Spends hours indoors with minimal movement. Physical skills, risk-taking, and body awareness remain underdeveloped.

Example 3: Social Interaction and Emotional Learning

  • Conventional Child: Resolves conflicts, shares, and builds friendships face-to-face. Emotional intelligence grows from real interactions.

  • Screen-Bound Child: Interactions are mostly online or with devices. Miscommunication is common, empathy may lag, and social confidence is reduced.

The Long-Term Cost

The effects go beyond childhood:

  • Weakened social skills

  • Reduced creativity and problem-solving ability

  • Emotional and mental strain

  • Physical health challenges

Reclaiming Childhood

Parents and caregivers can help:

  • Set screen limits and encourage outdoor play

  • Foster hands-on projects like art, gardening, or building

  • Schedule family time and social activities

  • Lead by example with balanced screen habits

Childhood should be lived, not observed through a screen. Real experiences—climbing, exploring, imagining—build resilience, creativity, and the foundation for a healthy, fulfilling life. Screens have a place, but they should never replace the moments that truly matter.

Wireheading in AI: When Models Game the System

What Is Wireheading?

In the AI context, wireheading refers to a situation where an AI system maximizes its reward or success metric without actually accomplishing the intended goal. Instead of solving the real problem, the system learns how to exploit the reward mechanism itself.

In simple terms: the AI learns how to “CHEAT” the scoring system.

Simple Examples to Get the Gist

  • Recommendation systems
    An AI is rewarded for increasing clicks. It starts showing sensational or misleading content because it drives clicks, even if user satisfaction drops.

  • Game-playing AI
    An agent is rewarded for “winning points” and discovers a bug or loophole that grants points without playing the game properly.

  • Customer support bots
    A bot is rewarded for shorter resolution time and begins ending conversations prematurely instead of solving issues.

In all cases, the reward metric improves, but the real-world objective fails.
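The recommendation-system case can be sketched in a few lines. All the numbers below are invented; the only point is that maximizing the proxy metric (clicks) selects a different item than the intended objective (satisfaction) would.

```python
# (content_type, click_rate, satisfaction) -- assumed values for illustration
catalog = [
    ("balanced article",  0.10, 0.9),
    ("useful tutorial",   0.12, 0.8),
    ("sensational rumor", 0.35, 0.2),
    ("outrage clip",      0.40, 0.1),
]

def proxy_reward(item):
    return item[1]      # clicks only: the metric the system is scored on

def true_objective(item):
    return item[2]      # user satisfaction: what we actually wanted

gamed = max(catalog, key=proxy_reward)        # what the wireheaded system serves
intended = max(catalog, key=true_objective)   # what the designers had in mind

print("proxy picks:", gamed[0])               # -> outrage clip
print("intended pick:", intended[0])          # -> balanced article
```

Nothing here is a bug in the optimizer: it maximizes its reward perfectly. The failure lives entirely in the gap between the proxy and the objective.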

Why Wireheading Happens

Wireheading usually arises due to:

  • Poorly defined reward functions

  • Over-simplified success metrics

  • Lack of real-world feedback loops

  • Over-optimization of proxy signals

The AI does exactly what it’s told, just not what was intended.

Prevention Mechanisms

Some common approaches to reduce wireheading include:

  • Better reward design: Use multiple signals instead of a single metric

  • Human-in-the-loop feedback: Periodic human evaluation of outcomes

  • Constraint-based learning: Explicitly restrict unsafe or shortcut behaviours

  • Continuous monitoring: Detect reward exploitation patterns early

No solution is perfect, but layered safeguards help.
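The first safeguard above, better reward design, can be illustrated with a toy blended reward. The weights and the example numbers are assumptions for illustration, not a recipe: a sensational item with a high click rate (0.40) but low satisfaction (0.1) beats a balanced item (0.10 clicks, 0.9 satisfaction) on clicks alone, yet loses once a slower satisfaction signal is weighted in.

```python
def blended_reward(click_rate, satisfaction, w_click=0.3, w_sat=0.7):
    """Blend a cheap proxy (clicks) with a slower, human-evaluated
    satisfaction signal, so neither can be gamed in isolation."""
    return w_click * click_rate + w_sat * satisfaction

# Clicks alone reward the bait (0.40 > 0.10) ...
assert 0.40 > 0.10
# ... but the blended reward prefers the balanced item.
assert blended_reward(0.40, 0.1) < blended_reward(0.10, 0.9)
```

The choice of weights is itself a judgment call, which is why this safeguard is usually paired with the human-in-the-loop and monitoring items on the list.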

Ongoing Challenges

  • Human goals are hard to encode precisely

  • Real-world success is often subjective

  • Over-monitoring reduces scalability

  • Models can find unexpected loopholes as they grow more capable

This makes wireheading an ongoing alignment challenge, not a one-time fix.

Final Word

Wireheading reminds us that AI systems optimize what we measure, not what we mean. As AI becomes more autonomous, careful incentive design and oversight are critical. Otherwise, systems may look successful on paper while quietly drifting away from real value.

AI Blind Dependency: A New Form of Technical Debt

As AI tools rapidly integrate into software development, a subtle but dangerous form of technical debt is emerging: blind dependency on AI systems. Unlike traditional technical debt (messy code, outdated libraries, or poor architecture), AI-driven debt often hides behind apparently working systems.

What Is AI-Induced Technical Debt?

AI technical debt occurs when teams rely on AI outputs without sufficient understanding, validation, or fallback mechanisms. Over time, this creates systems that are hard to debug, audit, or evolve.

Key contributors include:

  • Opaque models (black-box behavior)

  • Unversioned prompts and models

  • Hidden data dependencies

  • Over-automation of decision-making

Technical Parameters That Increase Risk

  1. Model Version Drift

    • Parameter: model_version

    • Issue: AI providers update models silently, changing outputs without code changes.

    • Result: Non-deterministic behaviour and regression bugs.

  2. Prompt Entropy

    • Parameter: prompt_length, temperature

    • Issue: High temperature or loosely structured prompts increase variability.

    • Result: Hard-to-reproduce errors and inconsistent logic.

  3. Latency and Availability Coupling

    • Parameters: p95_latency, timeout_ms

    • Issue: Core application logic depends on external AI APIs.

    • Result: AI outages become system-wide failures.

  4. Evaluation Blind Spots

    • Parameters: accuracy, hallucination_rate, confidence_score

    • Issue: Lack of automated evaluation pipelines for AI outputs.

    • Result: Silent correctness degradation over time.

  5. Data Leakage and Context Overload

    • Parameters: context_window_size, input_token_count

    • Issue: Excessive or sensitive context passed to models.

    • Result: Security, privacy, and compliance risks.

Why This Debt Compounds Faster

Traditional technical debt slows development. AI blind dependency compounds risk:

  • Debugging shifts from code to probabilistic behaviour

  • Root-cause analysis becomes model- and data-dependent

  • Junior developers may trust AI outputs without skepticism

This leads to systems that work until they don’t, and when they fail, recovery is expensive.

Reducing AI Dependency Debt

Practical mitigation strategies:

  • Version and log models, prompts, and parameters

  • Enforce human-in-the-loop checks for critical paths

  • Build deterministic fallbacks for AI failures

  • Track AI-specific metrics alongside system metrics

  • Treat prompts as code artifacts, not text blobs
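Several of these strategies can be combined in a thin wrapper around the model call. Everything below is a sketch: `call_model` stands in for whatever provider client you use, and the logged field names simply mirror the parameters discussed above (`model_version`, `temperature`, latency, prompt identity).

```python
import hashlib
import json
import logging
import time

logging.basicConfig(level=logging.INFO)

def versioned_call(call_model, prompt, *, model_version, temperature,
                   timeout_s=5.0, fallback="SERVICE_UNAVAILABLE"):
    """Wrap an AI call so every request is logged, pinned, and recoverable.

    `call_model` is a hypothetical provider function; swap in your client.
    """
    record = {
        "model_version": model_version,   # pin an explicit version, never "latest"
        "temperature": temperature,       # log it so outputs are reproducible
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    start = time.monotonic()
    try:
        result = call_model(prompt, model_version=model_version,
                            temperature=temperature, timeout=timeout_s)
    except Exception as exc:              # timeout, outage, rate limit, etc.
        record["error"] = repr(exc)
        logging.warning("AI call failed, using fallback: %s", json.dumps(record))
        return fallback                   # deterministic fallback path
    record["latency_s"] = round(time.monotonic() - start, 3)
    logging.info("AI call ok: %s", json.dumps(record))
    return result
```

Hashing the prompt treats it as a versioned artifact (any edit changes the digest), and the deterministic fallback keeps an AI outage from becoming a system-wide failure.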

Final Thought

AI accelerates development, but unmanaged acceleration increases technical-debt velocity. The goal isn’t less AI, but more engineering discipline around it. Blind trust scales faster than understanding, and that gap is where the next generation of technical debt is forming.


Sunday, December 14, 2025

Generation Overloaded: How Informational Obesity Is Shaping Today’s Minds

1.    We live in the most informed era in human history—yet many of today’s generation feel more confused, anxious, and mentally exhausted than ever. This paradox has a name: informational obesity.

2.    Just as physical obesity comes from consuming more calories than the body can process, informational obesity happens when we consume more content than the mind can digest. Social media feeds, breaking news, notifications, podcasts, reels, emails—there is no pause button. The result is constant mental clutter with very little meaningful insight.

3.    For today’s generation, growing up online means being exposed to opinions before forming beliefs, trends before values, and noise before knowledge. Skimming replaces deep thinking. Reacting replaces reflecting. Over time, attention spans shrink, decision-making weakens, and mental fatigue becomes normal.

4.    Informational obesity doesn’t mean information is bad; it means unfiltered consumption is. Knowledge requires space to settle, connect, and turn into wisdom. Without intentional limits, the brain stays busy but unfulfilled.

5.    The solution isn’t disconnecting from the digital world, but consuming consciously. Curate your inputs. Slow down your intake. Choose depth over volume. In an age of endless information, clarity is the real advantage.

6.    Because the healthiest minds of this generation won’t be the most informed, but the most intentional.

Few examples

7.    Somewhere between a notification and a swipe, a woman pauses her scrolling. She has watched five explainers, saved three threads, and nodded at a dozen opinions she barely remembers. She knows the headlines, the outrage, the trends of the week. Ask her what she truly believes, and the screen goes quiet. Not because she lacks information, but because none of it ever stayed long enough to become thought.

 8.    In the glow of her phone, she scrolls endlessly. She has memorized the arguments of strangers, dissected every trending post, and knows the scandals before they even unfold. Her mind is crowded, buzzing, restless. Ask her to form an original thought, and she hesitates. Her intellect is buried under a mountain of information she can’t digest, a generation drowning in knowledge but starved for understanding. 

9.    He sits on his bed, phone in one hand, tablet in the other. By noon, he’s read ten articles, watched seven videos, and knows the latest meme trends, TikTok challenges, and who’s trending in politics. Ask him a simple question about anything he actually cares about, and he stares blankly. He is full of facts but empty of understanding.

10.    He scrolls through endless feeds, learning the secrets of billionaires, the latest tech, and celebrity scandals. By dinner, he’s read enough to fill a library—but when asked to write an essay or explain anything in his own words, the words don’t come. He is overfed with information and starving for comprehension.

 

Thursday, December 11, 2025

When Machines Forget Better Than Humans

1.    In the near future, AI might not just assist us; it might outlearn and out-unlearn us. Machine unlearning is already evolving with techniques like SISA, Approximate Fisher Forgetting, and Influence-Function–based Unlearning, allowing models to selectively forget data. While still imperfect, AI’s ability to “forget” deliberately and at scale could soon surpass human capability.
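To give a flavour of how selective forgetting can work, here is a toy sketch of the sharding idea behind SISA: partition the training data into isolated shards, train one model per shard, and on a deletion request retrain only the shard that contained the point. Real SISA also slices training within shards, checkpoints, and aggregates per-shard predictions; "training" here is mocked by simply recording the shard's data.

```python
NUM_SHARDS = 4

def shard_of(item):
    """Deterministically assign an item to one shard."""
    return hash(item) % NUM_SHARDS

def train_shard(data):
    """Stand-in for training a per-shard model."""
    return {"seen": sorted(data)}

def fit(dataset):
    """Partition the dataset and train one model per shard."""
    shards = [[] for _ in range(NUM_SHARDS)]
    for item in dataset:
        shards[shard_of(item)].append(item)
    return shards, [train_shard(s) for s in shards]

def forget(shards, models, item):
    """Unlearn one item by retraining ONLY the shard that held it."""
    idx = shard_of(item)
    shards[idx].remove(item)
    models[idx] = train_shard(shards[idx])   # all other shards untouched
    return shards, models
```

The efficiency gain is structural: forgetting one point costs one shard's retraining rather than a full retrain, which is what makes deliberate forgetting feasible at scale.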

2.    Humans, by contrast, struggle with unlearning. Our beliefs, education, culture, and even genetics shape what we retain, sometimes stubbornly. A simple fact, like Pluto’s planetary status, shows how knowledge once unquestioned can become obsolete, yet unlearning it completely is never easy.

3.    As technology accelerates, generational gaps widen, and obsolescence hits faster, the gap may grow: AI may unlearn and adapt faster than we can. This raises a provocative scenario: will humans remain HITL (Human-in-the-Loop), or transition to AIITL (AI-in-the-Loop)—where AI doesn’t just assist but guides, corrects, and even reshapes human understanding?

4.    Unlearning, whether human or machine, is imperfect. But in the coming era, recognizing the limits of our own memory—and the power of AI to surpass it—may become humanity’s most urgent lesson.

Wednesday, December 10, 2025

When Was India Truly Aatmanirbhar? A Look at Dependency Across Eras

1.    India celebrates its independence every year with pride, remembering the struggle against colonial rule. But have we ever paused to ask: when was India truly self-reliant?

2.    Many believe independence in 1947 marked the beginning of an economically and technologically independent India. Yet a closer look at history suggests a more nuanced reality. From the time before the British East India Company arrived, through the colonial era, and even today, India’s dependency has evolved, but it hasn’t disappeared.

3.    The table below summarizes India’s dependency across three eras: pre-EIC, colonial (1600–1947), and post-independence. It highlights political control, economic reliance, trade patterns, technology, financial systems, food security, and defense.

When Was India Truly Aatmanirbhar? A Look at Dependency Across Eras by Anupam Tiwari 

Key Takeaways:

  • Pre-EIC (~1600s): India was largely self-sufficient, with thriving local industries, strong trade networks, and independent political structures.

  • Colonial Era (1600–1947): India’s resources, economy, and trade were controlled by the British and the East India Company (EIC), creating deep structural dependency.

  • Post-Independence: Politically sovereign, India made significant progress in various sectors, but remains selectively dependent on other countries for critical technology, defense equipment, and certain commodities.

Reflecting on this, it becomes clear that true self-reliance is not just political independence; it requires economic, technological, and strategic strength as well. Understanding history helps us chart a path toward a genuine Aatmanirbhar Bharat, rather than living in the comfort of a narrative that may not reflect reality.

 

Friday, November 21, 2025

The Quantum Race: 2025’s Most Exciting Processor Chips

1.    Quantum computing isn’t the future—it’s happening now. From IBM’s massive Condor with over 1,100 qubits to Google’s Willow, designed for error-suppressed, next-gen quantum calculations, the field is moving at lightning speed.

2.    This list of major quantum processor chips showcases the latest breakthroughs from IBM, Google, Microsoft, IonQ, Rigetti, Amazon, and QuEra. Whether it’s superconducting qubits, trapped ions, neutral atoms, or topological qubits, each processor is pushing the limits of speed, scale, and precision.

Check out the full list below and see the machines that are powering the next era of computation. 

 

MAJOR QUANTUM PROCESSOR CHIPS: KEY SPECIFICATIONS (UPDATED 2025) by Anupam Tiwari

Tuesday, November 18, 2025

India Needs Its Move 37 Moment: Bold Decisions for an Aatmanirbhar Future

1.    In March 2016, the world witnessed something extraordinary on a Go board in Seoul. AlphaGo, an AI system built by DeepMind, played a move in Game Two that stunned professional players across the globe. Move 37 — a stone placed far from any conventional position — looked, at first, like a mistake. Commentators paused, blinked, and dismissed it as a glitch. Yet, within minutes, it became clear that the move was not only valid, but brilliant. It shifted the momentum of the game, broke centuries of pattern, and ultimately led AlphaGo to a historic victory over one of the world’s best human players.

 


2.    Move 37 has since become a metaphor for visionary leaps: moves that don’t fit the old playbook but redefine the game itself.

3.    Today, as India pushes toward the ambition of Aatmanirbhar Bharat, we stand at a similar inflection point. Incremental steps are no longer enough. The world is moving at the speed of disruption — in AI, energy, manufacturing, supply chains, and defence technologies — and India must decide whether to play by the familiar book or to make its own Move 37.

Why Move 37 Matters for India

4.    Move 37 wasn’t random. It was the product of deep neural intuition — a calculated deviation when the old strategies couldn’t guarantee the outcome that AlphaGo needed.

5.    India, too, has followed familiar strategies for decades: cautious policymaking, gradual reforms, incremental capacity-building. These moves have brought progress, but they are not enough to achieve global leadership in the next generation of strategic sectors.

6.    The writing is indeed on the wall:

  • The world is re-organising its supply chains, and countries that hesitate now risk losing relevance for decades.

  • AI and semiconductor capabilities are becoming markers of national power, not just economic strength.

  • Energy security is rapidly shifting toward storage, green hydrogen, and next-gen renewables.

  • Strategic autonomy in defence tech requires rapid innovation cycles, not slow procurement loops.

7.    If India wants to accelerate toward self-reliance — not in isolation, but as a confident global contributor — it needs a Move 37 moment across sectors.

Where India Needs Its Bold Moves

  • Semiconductors and Electronics Manufacturing
    India’s recent push is encouraging, but global chip leadership is built on rapid iteration and massive risk-taking. A Move 37 decision here would mean decisive incentives, long-term capital commitment, and a willingness to back Indian design breakthroughs, not just assembly.
     
  • AI Sovereignty and Data Infrastructure
    As AI becomes foundational to governance, national security, healthcare, and education, India must create its sovereign AI stacks, foundational models tailored to Indian languages, and trusted compute infrastructure. The question is not whether India should do this, but how quickly.

  • Defence and Space Innovation
    The future belongs to nations that can design, test, and deploy new systems at speed. A Move 37 approach means empowering startups, simplifying procurement, and creating a culture where experimentation is encouraged, not penalised.

  • Energy Independence 2.0
    Battery manufacturing, energy storage, and green hydrogen ecosystems require bold decisions today. Incrementalism risks leaving India dependent on external technologies just as the world transitions to new energy architectures.

The Risk of Waiting Too Long

8.    The danger is not that India will fail. The danger is that India will move too slowly, while other nations take the risks and reap the rewards. Delay can be costly in this decade of compounding technological shifts.

9.    Move 37 teaches us that sometimes the move that feels uncomfortable or unconventional is precisely the one that changes the trajectory.

Toward India’s Move 37

10.    Aatmanirbhar Bharat is not just a policy vision; it’s a strategic necessity. It demands courage from policymakers, industry leaders, scientists, investors, and citizens. It demands bets that may look strange today but brilliant a few years from now.

11.    India’s Move 37 moment will not be a single decision. It will be a series of bold, well-calculated deviations from the comfort of the known — choices that redefine our economic and technological destiny.

If we choose boldly today, the next decade won’t just be another chapter of growth. It will be the decade where India rewrites the playbook.

Thursday, November 06, 2025

Breaking the Limits of Silicon: The Rise of Wafer-Scale Intelligence

1.    For half a century, computing has been built on the microchip: small dies cut by the hundreds from a single silicon wafer, then packaged and wired together. But that paradigm is reaching its physical and economic limits.

2.    At the heart of this bottleneck lies the reticle limit: the maximum area a lithography system can pattern in a single exposure, about 800 mm². It caps how big a single chip can be, forcing chipmakers like Nvidia to build massive data centers to connect thousands of smaller GPUs. The result: rising cost, energy use, and inefficiency.
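A back-of-envelope calculation makes the reticle limit concrete. Assuming a standard 300 mm wafer and the ~800 mm² reticle figure above (and ignoring edge loss and scribe lines, so this is an idealized upper bound, not a real yield number):

```python
# Rough upper bound on reticle-limited dies per 300 mm wafer.
# Idealized: no edge loss, no scribe lines, perfect packing.
import math


def max_dies(wafer_diameter_mm=300.0, reticle_area_mm2=800.0):
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2  # ~70,686 mm^2
    return int(wafer_area // reticle_area_mm2)
```

Even in this best case, a conventional chip can use only a small fraction of the wafer, which is exactly the ceiling that wafer-scale integration removes by treating the whole wafer as one device.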

3.    Wafer-Scale Integration (WSI) upends that model. Instead of slicing wafers into chips, the entire wafer becomes one giant processor — a seamless computing surface without boundaries. Companies like Cerebras Systems have already achieved this, building wafer-scale engines with trillions of transistors and orders-of-magnitude higher memory bandwidth.

4.    So why now? For decades, WSI was held back by impossible challenges — lithography limits, wafer defects, heat dissipation, and synchronization. Today, breakthroughs in fault-tolerant design, advanced cooling, and multi-beam e-beam lithography have finally cracked the code.

5.    The result is profound: entire data centers can shrink into something the size of a suitcase. The next leap in AI, energy, and defense won’t come from smaller chips — it will come from unified wafers.



6.    The shift from chips to wafers isn’t just another upgrade; it’s the beginning of computing’s post-silicon age.

Sunday, November 02, 2025

Scientists Turn Light into a Supersolid: A Quantum Leap for Computing

1.    For the first time ever, researchers have turned light into a “supersolid” — a strange state of matter that behaves like both a solid and a liquid at the same time. While supersolids have been made from atoms before, this is the first instance of coupling light and matter to create one.


What Is a Supersolid?

2.    A supersolid is a quantum state where particles form a regular, crystal-like structure (solid behavior) but can also flow without friction (liquid behavior). Think of ice that flows like water — that’s a rough analogy.

3.    Supersolids form at extremely low temperatures, close to absolute zero, because heat disrupts the delicate quantum interactions that allow them to exist. At these temperatures, particles settle into their lowest energy state, allowing researchers to observe quantum effects that are normally hidden.


How Do You Make Light Solid?

4.    Photons, the particles of light, normally don’t interact and can’t form a solid. Scientists overcame this by trapping photons inside a special material where they interact strongly with excitons — quasiparticles formed from an electron and a “hole” left behind when the electron moves.

Special Matter 

5.    The “special material” used to create the supersolid is a semiconductor structure, often made from Gallium Arsenide (GaAs), engineered with a photonic-crystal waveguide. This setup allows photons to strongly interact with excitons (electron-hole pairs) in the material, forming hybrid particles called polaritons. The semiconductor provides a solid framework, while the patterned waveguide guides the polaritons into an ordered, crystal-like structure. At the same time, these polaritons can flow freely without friction, giving the system its supersolid properties.

6.    This interaction creates polaritons, hybrid particles that are part light, part matter. The excitons provide the “solid” framework, while the light contributes quantum behavior and flow. When cooled, polaritons condense into a Bose–Einstein condensate, forming a supersolid — a lattice that is ordered like a solid but can flow without friction. Essentially, photons get “anchored” to matter, allowing light to act like a crystal.
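The "part light, part matter" character of polaritons can be written down explicitly. In the textbook two-coupled-oscillator model (standard exciton-polariton physics, not the specific Hamiltonian of this experiment), mixing a cavity photon of energy $E_c(k)$ with an exciton of energy $E_x$ via a Rabi coupling $\hbar\Omega_R$ gives two hybrid branches:

```latex
E_{\pm}(k) \;=\; \frac{E_c(k) + E_x}{2}
\;\pm\; \frac{1}{2}\sqrt{\bigl(E_c(k) - E_x\bigr)^2 + (\hbar\Omega_R)^2}
```

The lower branch $E_-$ is the polariton that condenses: near resonance ($E_c \approx E_x$) it is an even mix of photon and exciton, which is how light inherits enough "matter" character to interact, order, and condense.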


Why This Is Exciting

7.    Supersolids are more than a physics curiosity. They let us observe quantum interactions directly and could enable a new generation of technologies.

Potential applications include:

  • Quantum computing: Light-based supersolids could act as qubits, processing information faster and more efficiently.

  • Superconductors: Understanding frictionless flow could help create materials that conduct electricity without resistance.

  • Frictionless materials & sensors: Could lead to ultra-precise sensors or materials that move smoothly at the nanoscale.

  • Photonics & optical circuits: Using structured light for memory storage, quantum lasers, or light-based computing.

  • Fundamental physics: A playground to study quantum mechanics and simulate extreme cosmic conditions.

Quantum Information Storage

8.    Supersolids of light could act as a platform for storing and processing quantum information. The hybrid light-matter particles (polaritons) can occupy stable quantum states in the ordered lattice, effectively encoding information. Because they can flow without friction, these states are coherent and long-lived, making them ideal for qubits in future light-based quantum computers. This opens the possibility of faster, more energy-efficient quantum computation using photons instead of conventional electronics.


The Bottom Line

9.    Turning light into a supersolid is a milestone in quantum physics, bridging light and matter in a way never seen before. By coupling photons with excitons in a solid-like framework, scientists have created a crystal of light that flows like a liquid.

10.    While practical applications are still emerging, this discovery could pave the way for quantum computers, advanced materials, and entirely new technologies based on the behavior of light itself.

11.    The future may include computers, sensors, and circuits made not from silicon, but from “frozen light.”
